00:00:00.001 Started by upstream project "autotest-per-patch" build number 132856
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.106 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.107 The recommended git tool is: git
00:00:00.108 using credential 00000000-0000-0000-0000-000000000002
00:00:00.109 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.160 Fetching changes from the remote Git repository
00:00:00.162 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.208 Using shallow fetch with depth 1
00:00:00.208 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.208 > git --version # timeout=10
00:00:00.240 > git --version # 'git version 2.39.2'
00:00:00.241 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.262 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.262 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.844 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.856 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.868 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.868 > git config core.sparsecheckout # timeout=10
00:00:06.880 > git read-tree -mu HEAD # timeout=10
00:00:06.896 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.914 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.914 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.004 [Pipeline] Start of Pipeline
00:00:07.013 [Pipeline] library
00:00:07.014 Loading library shm_lib@master
00:00:07.014 Library shm_lib@master is cached. Copying from home.
00:00:07.028 [Pipeline] node
00:00:07.039 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.040 [Pipeline] {
00:00:07.047 [Pipeline] catchError
00:00:07.048 [Pipeline] {
00:00:07.056 [Pipeline] wrap
00:00:07.063 [Pipeline] {
00:00:07.068 [Pipeline] stage
00:00:07.069 [Pipeline] { (Prologue)
00:00:07.081 [Pipeline] echo
00:00:07.082 Node: VM-host-SM38
00:00:07.086 [Pipeline] cleanWs
00:00:07.095 [WS-CLEANUP] Deleting project workspace...
00:00:07.095 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.101 [WS-CLEANUP] done
00:00:07.290 [Pipeline] setCustomBuildProperty
00:00:07.352 [Pipeline] httpRequest
00:00:07.655 [Pipeline] echo
00:00:07.656 Sorcerer 10.211.164.20 is alive
00:00:07.665 [Pipeline] retry
00:00:07.667 [Pipeline] {
00:00:07.680 [Pipeline] httpRequest
00:00:07.683 HttpMethod: GET
00:00:07.684 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.684 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.686 Response Code: HTTP/1.1 200 OK
00:00:07.686 Success: Status code 200 is in the accepted range: 200,404
00:00:07.687 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.552 [Pipeline] }
00:00:08.571 [Pipeline] // retry
00:00:08.580 [Pipeline] sh
00:00:08.870 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.888 [Pipeline] httpRequest
00:00:09.678 [Pipeline] echo
00:00:09.680 Sorcerer 10.211.164.20 is alive
00:00:09.689 [Pipeline] retry
00:00:09.691 [Pipeline] {
00:00:09.705 [Pipeline] httpRequest
00:00:09.710 HttpMethod: GET
00:00:09.710 URL: http://10.211.164.20/packages/spdk_dc2db840545ac9f14f59d2ca3a5329a54fa67a95.tar.gz
00:00:09.711 Sending request to url: http://10.211.164.20/packages/spdk_dc2db840545ac9f14f59d2ca3a5329a54fa67a95.tar.gz
00:00:09.713 Response Code: HTTP/1.1 200 OK
00:00:09.713 Success: Status code 200 is in the accepted range: 200,404
00:00:09.714 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_dc2db840545ac9f14f59d2ca3a5329a54fa67a95.tar.gz
00:00:46.511 [Pipeline] }
00:00:46.529 [Pipeline] // retry
00:00:46.536 [Pipeline] sh
00:00:46.825 + tar --no-same-owner -xf spdk_dc2db840545ac9f14f59d2ca3a5329a54fa67a95.tar.gz
00:00:50.151 [Pipeline] sh
00:00:50.439 + git -C spdk log --oneline -n5
00:00:50.439 dc2db8405 bdev/nvme: bdev nvme delete public api
00:00:50.439 e01cb43b8 mk/spdk.common.mk sed the minor version
00:00:50.439 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:00:50.439 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:00:50.439 66289a6db build: use VERSION file for storing version
00:00:50.458 [Pipeline] writeFile
00:00:50.473 [Pipeline] sh
00:00:50.761 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:50.775 [Pipeline] sh
00:00:51.064 + cat autorun-spdk.conf
00:00:51.064 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:51.064 SPDK_TEST_NVME=1
00:00:51.064 SPDK_TEST_FTL=1
00:00:51.064 SPDK_TEST_ISAL=1
00:00:51.064 SPDK_RUN_ASAN=1
00:00:51.064 SPDK_RUN_UBSAN=1
00:00:51.064 SPDK_TEST_XNVME=1
00:00:51.064 SPDK_TEST_NVME_FDP=1
00:00:51.064 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:51.073 RUN_NIGHTLY=0
00:00:51.077 [Pipeline] }
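
Note: autorun-spdk.conf is a plain KEY=value file; the scripts traced below source it and gate optional features on its flags (visible as the ++ assignments and (( ... == 1 )) checks). A minimal sketch of that consumption pattern, assuming the same file location; the echo is illustrative only:

    #!/usr/bin/env bash
    # Import the job flags; each SPDK_* entry becomes a shell variable.
    source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
    # Arithmetic guards like this gate optional test features.
    if (( SPDK_TEST_NVME_FDP == 1 )); then
        echo "FDP testing requested for this run"
    fi
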
00:00:51.086 [Pipeline] // stage
00:00:51.095 [Pipeline] stage
00:00:51.097 [Pipeline] { (Run VM)
00:00:51.105 [Pipeline] sh
00:00:51.413 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:51.413 + echo 'Start stage prepare_nvme.sh'
00:00:51.413 Start stage prepare_nvme.sh
00:00:51.413 + [[ -n 3 ]]
00:00:51.413 + disk_prefix=ex3
00:00:51.413 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:00:51.413 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:00:51.413 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:00:51.413 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:51.413 ++ SPDK_TEST_NVME=1
00:00:51.413 ++ SPDK_TEST_FTL=1
00:00:51.413 ++ SPDK_TEST_ISAL=1
00:00:51.413 ++ SPDK_RUN_ASAN=1
00:00:51.413 ++ SPDK_RUN_UBSAN=1
00:00:51.413 ++ SPDK_TEST_XNVME=1
00:00:51.413 ++ SPDK_TEST_NVME_FDP=1
00:00:51.413 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:51.413 ++ RUN_NIGHTLY=0
00:00:51.413 + cd /var/jenkins/workspace/nvme-vg-autotest
00:00:51.413 + nvme_files=()
00:00:51.413 + declare -A nvme_files
00:00:51.413 + backend_dir=/var/lib/libvirt/images/backends
00:00:51.413 + nvme_files['nvme.img']=5G
00:00:51.413 + nvme_files['nvme-cmb.img']=5G
00:00:51.413 + nvme_files['nvme-multi0.img']=4G
00:00:51.413 + nvme_files['nvme-multi1.img']=4G
00:00:51.413 + nvme_files['nvme-multi2.img']=4G
00:00:51.413 + nvme_files['nvme-openstack.img']=8G
00:00:51.413 + nvme_files['nvme-zns.img']=5G
00:00:51.413 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:51.413 + (( SPDK_TEST_FTL == 1 ))
00:00:51.413 + nvme_files["nvme-ftl.img"]=6G
00:00:51.413 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:51.413 + nvme_files["nvme-fdp.img"]=1G
00:00:51.413 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:51.413 + for nvme in "${!nvme_files[@]}"
00:00:51.413 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:00:51.413 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:51.413 + for nvme in "${!nvme_files[@]}"
00:00:51.413 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G
00:00:51.988 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:51.988 + for nvme in "${!nvme_files[@]}"
00:00:51.988 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:00:51.988 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:51.988 + for nvme in "${!nvme_files[@]}"
00:00:51.988 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:00:52.250 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:52.250 + for nvme in "${!nvme_files[@]}"
00:00:52.250 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:00:52.824 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:52.824 + for nvme in "${!nvme_files[@]}"
00:00:52.824 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:00:52.824 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:52.824 + for nvme in "${!nvme_files[@]}"
00:00:52.824 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:00:52.824 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:52.824 + for nvme in "${!nvme_files[@]}"
00:00:52.824 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G
00:00:53.086 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:53.086 + for nvme in "${!nvme_files[@]}"
00:00:53.086 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:00:53.348 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:53.348 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:00:53.348 + echo 'End stage prepare_nvme.sh'
00:00:53.348 End stage prepare_nvme.sh
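
Note: the trace above is driven by a bash associative array mapping image name to size; the FTL and FDP images are appended only when their test flags are set, and the loop hands each pair to create_nvme_img.sh. A condensed sketch reconstructed from the trace (not the verbatim source of prepare_nvme.sh):

    #!/usr/bin/env bash
    # Backing-file name -> image size; optional images are added per test flag.
    declare -A nvme_files=(
        [nvme.img]=5G [nvme-cmb.img]=5G [nvme-multi0.img]=4G
        [nvme-multi1.img]=4G [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G [nvme-zns.img]=5G
    )
    (( SPDK_TEST_FTL == 1 )) && nvme_files[nvme-ftl.img]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G

    backend_dir=/var/lib/libvirt/images/backends
    disk_prefix=ex3
    for nvme in "${!nvme_files[@]}"; do
        # Iteration order over an associative array is unspecified,
        # which is why the creation order in the log looks shuffled.
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/$disk_prefix-$nvme" -s "${nvme_files[$nvme]}"
    done
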
00:00:53.362 [Pipeline] sh
00:00:53.647 + DISTRO=fedora39
00:00:53.647 + CPUS=10
00:00:53.647 + RAM=12288
00:00:53.647 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:53.647 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:00:53.647
00:00:53.647 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:00:53.647 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:00:53.647 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:00:53.647 HELP=0
00:00:53.647 DRY_RUN=0
00:00:53.647 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,
00:00:53.647 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:53.647 NVME_AUTO_CREATE=0
00:00:53.647 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,,
00:00:53.647 NVME_CMB=,,,,
00:00:53.647 NVME_PMR=,,,,
00:00:53.647 NVME_ZNS=,,,,
00:00:53.647 NVME_MS=true,,,,
00:00:53.647 NVME_FDP=,,,on,
00:00:53.647 SPDK_VAGRANT_DISTRO=fedora39
00:00:53.647 SPDK_VAGRANT_VMCPU=10
00:00:53.647 SPDK_VAGRANT_VMRAM=12288
00:00:53.647 SPDK_VAGRANT_PROVIDER=libvirt
00:00:53.647 SPDK_VAGRANT_HTTP_PROXY=
00:00:53.647 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:53.647 SPDK_OPENSTACK_NETWORK=0
00:00:53.647 VAGRANT_PACKAGE_BOX=0
00:00:53.647 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:53.647 FORCE_DISTRO=true
00:00:53.647 VAGRANT_BOX_VERSION=
00:00:53.647 EXTRA_VAGRANTFILES=
00:00:53.647 NIC_MODEL=e1000
00:00:53.647
00:00:53.647 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:00:53.647 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:00:56.197 Bringing machine 'default' up with 'libvirt' provider...
00:00:56.197 ==> default: Creating image (snapshot of base box volume).
00:00:56.456 ==> default: Creating domain with the following settings...
00:00:56.456 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734034300_3b3cb092f87e16d80348
00:00:56.456 ==> default: -- Domain type: kvm
00:00:56.456 ==> default: -- Cpus: 10
00:00:56.456 ==> default: -- Feature: acpi
00:00:56.456 ==> default: -- Feature: apic
00:00:56.456 ==> default: -- Feature: pae
00:00:56.456 ==> default: -- Memory: 12288M
00:00:56.456 ==> default: -- Memory Backing: hugepages:
00:00:56.456 ==> default: -- Management MAC:
00:00:56.456 ==> default: -- Loader:
00:00:56.456 ==> default: -- Nvram:
00:00:56.456 ==> default: -- Base box: spdk/fedora39
00:00:56.456 ==> default: -- Storage pool: default
00:00:56.456 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734034300_3b3cb092f87e16d80348.img (20G)
00:00:56.456 ==> default: -- Volume Cache: default
00:00:56.456 ==> default: -- Kernel:
00:00:56.456 ==> default: -- Initrd:
00:00:56.456 ==> default: -- Graphics Type: vnc
00:00:56.456 ==> default: -- Graphics Port: -1
00:00:56.456 ==> default: -- Graphics IP: 127.0.0.1
00:00:56.456 ==> default: -- Graphics Password: Not defined
00:00:56.456 ==> default: -- Video Type: cirrus
00:00:56.456 ==> default: -- Video VRAM: 9216
00:00:56.456 ==> default: -- Sound Type:
00:00:56.456 ==> default: -- Keymap: en-us
00:00:56.456 ==> default: -- TPM Path:
00:00:56.456 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:56.456 ==> default: -- Command line args:
00:00:56.456 ==> default: -> value=-device,
00:00:56.456 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:56.456 ==> default: -> value=-drive,
00:00:56.456 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:00:56.456 ==> default: -> value=-device,
00:00:56.456 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:00:56.456 ==> default: -> value=-device,
00:00:56.456 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:56.456 ==> default: -> value=-drive,
00:00:56.456 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0,
00:00:56.456 ==> default: -> value=-device,
00:00:56.456 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.456 ==> default: -> value=-device,
00:00:56.456 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:00:56.456 ==> default: -> value=-drive,
00:00:56.456 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:00:56.456 ==> default: -> value=-device,
00:00:56.456 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.456 ==> default: -> value=-drive,
00:00:56.456 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:00:56.456 ==> default: -> value=-device,
00:00:56.456 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.456 ==> default: -> value=-drive,
00:00:56.456 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:00:56.456 ==> default: -> value=-device,
00:00:56.456 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.456 ==> default: -> value=-device,
00:00:56.456 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:00:56.456 ==> default: -> value=-device,
00:00:56.456 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:00:56.456 ==> default: -> value=-drive,
00:00:56.456 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:00:56.456 ==> default: -> value=-device,
00:00:56.456 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
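
Note: the last device chain above builds the FDP-capable target: an nvme-subsys with fdp=on (fdp.runs, fdp.nrg and fdp.nruh size the reclaim units, groups and handles), an nvme controller joined to it via subsys=, and an nvme-ns backed by the raw image. Flattened into plain QEMU flags, a sketch of the nvme-3 chain only, with machine, CPU and memory flags omitted:

    qemu-system-x86_64 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096
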
00:00:56.456 ==> default: Creating shared folders metadata...
00:00:56.456 ==> default: Starting domain.
00:00:57.390 ==> default: Waiting for domain to get an IP address...
00:01:12.279 ==> default: Waiting for SSH to become available...
00:01:12.279 ==> default: Configuring and enabling network interfaces...
00:01:16.488 default: SSH address: 192.168.121.91:22
00:01:16.488 default: SSH username: vagrant
00:01:16.488 default: SSH auth method: private key
00:01:18.425 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:26.571 ==> default: Mounting SSHFS shared folder...
00:01:27.544 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:27.544 ==> default: Checking Mount..
00:01:28.929 ==> default: Folder Successfully Mounted!
00:01:28.929
00:01:28.929 SUCCESS!
00:01:28.929
00:01:28.929 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:28.929 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:28.929 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:28.929
00:01:28.939 [Pipeline] }
00:01:28.954 [Pipeline] // stage
00:01:28.963 [Pipeline] dir
00:01:28.963 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:01:28.965 [Pipeline] {
00:01:28.977 [Pipeline] catchError
00:01:28.979 [Pipeline] {
00:01:28.992 [Pipeline] sh
00:01:29.275 + vagrant ssh-config --host vagrant
00:01:29.275 + sed -ne '/^Host/,$p'
00:01:29.275 + tee ssh_conf
00:01:31.820 Host vagrant
00:01:31.820 HostName 192.168.121.91
00:01:31.820 User vagrant
00:01:31.820 Port 22
00:01:31.820 UserKnownHostsFile /dev/null
00:01:31.821 StrictHostKeyChecking no
00:01:31.821 PasswordAuthentication no
00:01:31.821 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:31.821 IdentitiesOnly yes
00:01:31.821 LogLevel FATAL
00:01:31.821 ForwardAgent yes
00:01:31.821 ForwardX11 yes
00:01:31.821
00:01:31.836 [Pipeline] withEnv
00:01:31.838 [Pipeline] {
00:01:31.850 [Pipeline] sh
00:01:32.134 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:32.134 source /etc/os-release
00:01:32.134 [[ -e /image.version ]] && img=$(< /image.version)
00:01:32.134 # Minimal, systemd-like check.
00:01:32.134 if [[ -e /.dockerenv ]]; then
00:01:32.134 # Clear garbage from the node'\''s name:
00:01:32.134 # agt-er_autotest_547-896 -> autotest_547-896
00:01:32.134 # $HOSTNAME is the actual container id
00:01:32.134 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:32.134 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:32.134 # We can assume this is a mount from a host where container is running,
00:01:32.134 # so fetch its hostname to easily identify the target swarm worker.
00:01:32.134 container="$(< /etc/hostname) ($agent)"
00:01:32.134 else
00:01:32.134 # Fallback
00:01:32.134 container=$agent
00:01:32.134 fi
00:01:32.134 fi
00:01:32.134 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:32.134 '
00:01:32.409 [Pipeline] }
00:01:32.425 [Pipeline] // withEnv
00:01:32.433 [Pipeline] setCustomBuildProperty
00:01:32.448 [Pipeline] stage
00:01:32.450 [Pipeline] { (Tests)
00:01:32.466 [Pipeline] sh
00:01:32.750 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:33.027 [Pipeline] sh
00:01:33.312 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:33.590 [Pipeline] timeout
00:01:33.590 Timeout set to expire in 50 min
00:01:33.592 [Pipeline] {
00:01:33.606 [Pipeline] sh
00:01:33.889 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:01:34.461 HEAD is now at dc2db8405 bdev/nvme: bdev nvme delete public api
00:01:34.474 [Pipeline] sh
00:01:34.758 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:01:35.034 [Pipeline] sh
00:01:35.320 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:35.597 [Pipeline] sh
00:01:35.942 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:01:36.203 ++ readlink -f spdk_repo
00:01:36.203 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:36.203 + [[ -n /home/vagrant/spdk_repo ]]
00:01:36.203 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:36.203 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:36.203 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:36.203 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:36.203 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:36.203 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:36.203 + cd /home/vagrant/spdk_repo
00:01:36.203 + source /etc/os-release
00:01:36.203 ++ NAME='Fedora Linux'
00:01:36.203 ++ VERSION='39 (Cloud Edition)'
00:01:36.203 ++ ID=fedora
00:01:36.203 ++ VERSION_ID=39
00:01:36.203 ++ VERSION_CODENAME=
00:01:36.203 ++ PLATFORM_ID=platform:f39
00:01:36.203 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:36.203 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:36.203 ++ LOGO=fedora-logo-icon
00:01:36.203 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:36.203 ++ HOME_URL=https://fedoraproject.org/
00:01:36.203 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:36.203 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:36.203 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:36.203 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:36.203 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:36.203 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:36.203 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:36.203 ++ SUPPORT_END=2024-11-12
00:01:36.203 ++ VARIANT='Cloud Edition'
00:01:36.203 ++ VARIANT_ID=cloud
00:01:36.203 + uname -a
00:01:36.203 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:36.203 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:36.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:36.725 Hugepages
00:01:36.725 node hugesize free / total
00:01:36.725 node0 1048576kB 0 / 0
00:01:36.725 node0 2048kB 0 / 0
00:01:36.725
00:01:36.725 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:36.986 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:36.986 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:36.986 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:01:36.986 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:01:36.986 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:36.986 + rm -f /tmp/spdk-ld-path
00:01:36.986 + source autorun-spdk.conf
00:01:36.986 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:36.986 ++ SPDK_TEST_NVME=1
00:01:36.986 ++ SPDK_TEST_FTL=1
00:01:36.986 ++ SPDK_TEST_ISAL=1
00:01:36.986 ++ SPDK_RUN_ASAN=1
00:01:36.986 ++ SPDK_RUN_UBSAN=1
00:01:36.986 ++ SPDK_TEST_XNVME=1
00:01:36.986 ++ SPDK_TEST_NVME_FDP=1
00:01:36.986 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:36.986 ++ RUN_NIGHTLY=0
00:01:36.986 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:36.986 + [[ -n '' ]]
00:01:36.986 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:36.986 + for M in /var/spdk/build-*-manifest.txt
00:01:36.986 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:36.986 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:36.986 + for M in /var/spdk/build-*-manifest.txt
00:01:36.986 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:36.986 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:36.986 + for M in /var/spdk/build-*-manifest.txt
00:01:36.986 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:36.986 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:36.986 ++ uname
00:01:36.986 + [[ Linux == \L\i\n\u\x ]]
00:01:36.986 + sudo dmesg -T
00:01:36.986 + sudo dmesg --clear
00:01:36.986 + dmesg_pid=5034
00:01:36.986 + [[ Fedora Linux == FreeBSD ]]
00:01:36.986 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:36.986 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:36.986 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:36.986 + [[ -x /usr/src/fio-static/fio ]]
00:01:36.986 + sudo dmesg -Tw
00:01:36.986 + export FIO_BIN=/usr/src/fio-static/fio
00:01:36.986 + FIO_BIN=/usr/src/fio-static/fio
00:01:36.986 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:36.986 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:36.986 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:36.986 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:36.986 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:36.986 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:36.986 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:36.986 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:36.986 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:37.247 20:12:21 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:37.247 20:12:21 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:37.247 20:12:21 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:37.247 20:12:21 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:37.247 20:12:21 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:37.247 20:12:21 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:37.247 20:12:21 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:37.247 20:12:21 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:37.247 20:12:21 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:37.247 20:12:21 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:37.247 20:12:21 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:37.247 20:12:21 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:01:37.247 20:12:21 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:37.247 20:12:21 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:37.247 20:12:21 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:37.247 20:12:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:37.247 20:12:21 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:37.247 20:12:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:37.247 20:12:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:37.247 20:12:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:37.247 20:12:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.247 20:12:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.247 20:12:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.247 20:12:21 -- paths/export.sh@5 -- $ export PATH
00:01:37.247 20:12:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
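
Note: paths/export.sh prepends each pinned toolchain to PATH, which is why the value above accumulates duplicate entries; prepending keeps the /opt toolchains ahead of the system ones. The pattern reduced to its core (paths taken from the log; the script's own source is not shown here):

    # Prepend pinned toolchains so they shadow the /usr/bin versions.
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH
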
00:01:37.247 20:12:21 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:37.247 20:12:21 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:37.247 20:12:21 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734034341.XXXXXX
00:01:37.247 20:12:21 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734034341.JzvKyU
00:01:37.247 20:12:21 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:37.247 20:12:21 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:37.247 20:12:21 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:37.247 20:12:21 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:37.247 20:12:21 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:37.247 20:12:21 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:37.247 20:12:21 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:37.247 20:12:21 -- common/autotest_common.sh@10 -- $ set +x
00:01:37.247 20:12:21 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:37.247 20:12:21 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:37.247 20:12:21 -- pm/common@17 -- $ local monitor
00:01:37.247 20:12:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:37.247 20:12:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:37.247 20:12:21 -- pm/common@25 -- $ sleep 1
00:01:37.247 20:12:21 -- pm/common@21 -- $ date +%s
00:01:37.247 20:12:21 -- pm/common@21 -- $ date +%s
00:01:37.247 20:12:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734034341
00:01:37.247 20:12:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734034341
00:01:37.247 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734034341_collect-cpu-load.pm.log
00:01:37.247 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734034341_collect-vmstat.pm.log
00:01:38.190 20:12:22 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:38.190 20:12:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:38.190 20:12:22 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:38.190 20:12:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:38.190 20:12:22 -- spdk/autobuild.sh@16 -- $ date -u
00:01:38.190 Thu Dec 12 08:12:22 PM UTC 2024
00:01:38.190 20:12:22 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:38.190 v25.01-rc1-3-gdc2db8405
00:01:38.190 20:12:22 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:38.190 20:12:22 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:38.190 20:12:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:38.190 20:12:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:38.190 20:12:22 -- common/autotest_common.sh@10 -- $ set +x
00:01:38.190 ************************************
00:01:38.190 START TEST asan
00:01:38.190 ************************************
00:01:38.190 using asan
00:01:38.190 20:12:22 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:38.190
00:01:38.190 real 0m0.000s
00:01:38.190 user 0m0.000s
00:01:38.190 sys 0m0.000s
00:01:38.190 20:12:22 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:38.190 20:12:22 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:38.190 ************************************
00:01:38.190 END TEST asan
00:01:38.190 ************************************
00:01:38.452 20:12:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:38.452 20:12:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:38.452 20:12:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:38.452 20:12:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:38.452 20:12:22 -- common/autotest_common.sh@10 -- $ set +x
00:01:38.452 ************************************
00:01:38.452 START TEST ubsan
00:01:38.452 ************************************
00:01:38.452 using ubsan
00:01:38.452 20:12:22 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:38.452
00:01:38.452 real 0m0.000s
00:01:38.452 user 0m0.000s
00:01:38.452 sys 0m0.000s
00:01:38.452 ************************************
00:01:38.452 END TEST ubsan
00:01:38.452 ************************************
00:01:38.452 20:12:22 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:38.452 20:12:22 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:38.452 20:12:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:38.452 20:12:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:38.452 20:12:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:38.452 20:12:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:38.452 20:12:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:38.452 20:12:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:38.452 20:12:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
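
Note: run_test brackets a command between START/END banners and reports the real/user/sys timings, as the asan and ubsan blocks above show. A rough standalone imitation of that shape (a sketch, not SPDK's autotest_common.sh implementation):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                      # prints the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test asan echo 'using asan'
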
00:01:38.452 20:12:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:38.452 20:12:22 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:38.452 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:38.452 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:39.026 Using 'verbs' RDMA provider
00:01:52.206 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:02.209 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:02.209 Creating mk/config.mk...done.
00:02:02.209 Creating mk/cc.flags.mk...done.
00:02:02.209 Type 'make' to build.
00:02:02.209 20:12:45 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:02.209 20:12:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:02.209 20:12:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:02.209 20:12:45 -- common/autotest_common.sh@10 -- $ set +x
00:02:02.209 ************************************
00:02:02.209 START TEST make
00:02:02.209 ************************************
00:02:02.209 20:12:45 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:02.209 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:02.209 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:02.209 meson setup builddir \
00:02:02.209 -Dwith-libaio=enabled \
00:02:02.209 -Dwith-liburing=enabled \
00:02:02.209 -Dwith-libvfn=disabled \
00:02:02.209 -Dwith-spdk=disabled \
00:02:02.209 -Dexamples=false \
00:02:02.209 -Dtests=false \
00:02:02.209 -Dtools=false && \
00:02:02.209 meson compile -C builddir && \
00:02:02.209 cd -)
00:02:04.757 The Meson build system
00:02:04.757 Version: 1.5.0
00:02:04.757 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:04.757 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:04.757 Build type: native build
00:02:04.757 Project name: xnvme
00:02:04.757 Project version: 0.7.5
00:02:04.757 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:04.757 C linker for the host machine: cc ld.bfd 2.40-14
00:02:04.757 Host machine cpu family: x86_64
00:02:04.757 Host machine cpu: x86_64
00:02:04.757 Message: host_machine.system: linux
00:02:04.757 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:04.757 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:04.757 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:04.757 Run-time dependency threads found: YES
00:02:04.757 Has header "setupapi.h" : NO
00:02:04.757 Has header "linux/blkzoned.h" : YES
00:02:04.757 Has header "linux/blkzoned.h" : YES (cached)
00:02:04.757 Has header "libaio.h" : YES
00:02:04.757 Library aio found: YES
00:02:04.757 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:04.757 Run-time dependency liburing found: YES 2.2
00:02:04.757 Dependency libvfn skipped: feature with-libvfn disabled
00:02:04.757 Found CMake: /usr/bin/cmake (3.27.7)
00:02:04.757 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:04.757 Subproject spdk : skipped: feature with-spdk disabled
00:02:04.757 Run-time dependency appleframeworks found: NO (tried framework)
00:02:04.757 Run-time dependency appleframeworks found: NO (tried framework)
00:02:04.757 Library rt found: YES
00:02:04.757 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:04.757 Configuring xnvme_config.h using configuration
00:02:04.757 Configuring xnvme.spec using configuration
00:02:04.757 Run-time dependency bash-completion found: YES 2.11
00:02:04.757 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:04.757 Program cp found: YES (/usr/bin/cp)
00:02:04.757 Build targets in project: 3
00:02:04.757
00:02:04.757 xnvme 0.7.5
00:02:04.757
00:02:04.757 Subprojects
00:02:04.757 spdk : NO Feature 'with-spdk' disabled
00:02:04.757
00:02:04.757 User defined options
00:02:04.757 examples : false
00:02:04.757 tests : false
00:02:04.757 tools : false
00:02:04.757 with-libaio : enabled
00:02:04.757 with-liburing: enabled
00:02:04.757 with-libvfn : disabled
00:02:04.757 with-spdk : disabled
00:02:04.757
00:02:04.757 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:05.018 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:05.018 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:05.018 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:05.018 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:05.018 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:05.277 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:05.277 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:05.277 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:05.277 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:05.277 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:05.277 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:05.277 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:05.277 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:05.277 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:05.277 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:05.277 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:05.277 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:05.277 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:05.277 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:05.277 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:05.277 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:05.277 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:05.277 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:05.277 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:05.536 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:05.536 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:05.536 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:05.536 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:05.536 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:05.536 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:05.536 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:05.536 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:05.536 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:05.536 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:05.536 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:05.536 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:05.536 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:05.536 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:05.536 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:05.536 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:05.536 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:05.536 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:05.536 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:05.536 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:05.536 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:05.536 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:05.536 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:05.536 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:05.536 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:05.536 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:05.536 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:05.536 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:05.536 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:05.536 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:05.536 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:05.536 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:05.536 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:05.536 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:05.536 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:05.794 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:05.794 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:05.794 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:05.794 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:05.794 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:05.794 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:05.794 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:05.794 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:05.794 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:05.794 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:05.794 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:05.794 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:05.795 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:05.795 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:06.052 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:06.310 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:06.311 [75/76] Linking static target lib/libxnvme.a
00:02:06.311 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:06.311 INFO: autodetecting backend as ninja
00:02:06.311 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:06.311 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:12.883 The Meson build system
00:02:12.883 Version: 1.5.0
00:02:12.883 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:12.883 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:12.883 Build type: native build
00:02:12.883 Program cat found: YES (/usr/bin/cat)
00:02:12.883 Project name: DPDK
00:02:12.883 Project version: 24.03.0
00:02:12.883 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:12.883 C linker for the host machine: cc ld.bfd 2.40-14
00:02:12.883 Host machine cpu family: x86_64
00:02:12.883 Host machine cpu: x86_64
00:02:12.883 Message: ## Building in Developer Mode ##
00:02:12.883 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:12.883 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:12.883 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:12.883 Program python3 found: YES (/usr/bin/python3)
00:02:12.883 Program cat found: YES (/usr/bin/cat)
00:02:12.883 Compiler for C supports arguments -march=native: YES
00:02:12.883 Checking for size of "void *" : 8
00:02:12.883 Checking for size of "void *" : 8 (cached)
00:02:12.883 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:12.883 Library m found: YES
00:02:12.883 Library numa found: YES
00:02:12.883 Has header "numaif.h" : YES
00:02:12.883 Library fdt found: NO
00:02:12.883 Library execinfo found: NO
00:02:12.883 Has header "execinfo.h" : YES
00:02:12.883 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:12.883 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:12.883 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:12.883 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:12.883 Run-time dependency openssl found: YES 3.1.1
00:02:12.883 Run-time dependency libpcap found: YES 1.10.4
00:02:12.883 Has header "pcap.h" with dependency libpcap: YES
00:02:12.883 Compiler for C supports arguments -Wcast-qual: YES
00:02:12.883 Compiler for C supports arguments -Wdeprecated: YES
00:02:12.883 Compiler for C supports arguments -Wformat: YES
00:02:12.883 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:12.883 Compiler for C supports arguments -Wformat-security: NO
00:02:12.883 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:12.883 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:12.883 Compiler for C supports arguments -Wnested-externs: YES
00:02:12.883 Compiler for C supports arguments -Wold-style-definition: YES
00:02:12.883 Compiler for C supports arguments -Wpointer-arith: YES
00:02:12.883 Compiler for C supports arguments -Wsign-compare: YES
00:02:12.883 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:12.883 Compiler for C supports arguments -Wundef: YES
00:02:12.883 Compiler for C supports arguments -Wwrite-strings: YES
00:02:12.883 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:12.883 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:12.884 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:12.884 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:12.884 Program objdump found: YES (/usr/bin/objdump)
00:02:12.884 Compiler for C supports arguments -mavx512f: YES
00:02:12.884 Checking if "AVX512 checking" compiles: YES
00:02:12.884 Fetching value of define "__SSE4_2__" : 1
00:02:12.884 Fetching value of define "__AES__" : 1
00:02:12.884 Fetching value of define "__AVX__" : 1
00:02:12.884 Fetching value of define "__AVX2__" : 1
00:02:12.884 Fetching value of define "__AVX512BW__" : 1
00:02:12.884 Fetching value of define "__AVX512CD__" : 1
00:02:12.884 Fetching value of define "__AVX512DQ__" : 1
00:02:12.884 Fetching value of define "__AVX512F__" : 1
00:02:12.884 Fetching value of define "__AVX512VL__" : 1
00:02:12.884 Fetching value of define "__PCLMUL__" : 1
00:02:12.884 Fetching value of define "__RDRND__" : 1
00:02:12.884 Fetching value of define "__RDSEED__" : 1
00:02:12.884 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:12.884 Fetching value of define "__znver1__" : (undefined)
00:02:12.884 Fetching value of define "__znver2__" : (undefined)
00:02:12.884 Fetching value of define "__znver3__" : (undefined)
00:02:12.884 Fetching value of define "__znver4__" : (undefined)
00:02:12.884 Library asan found: YES
00:02:12.884 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:12.884 Message: lib/log: Defining dependency "log"
00:02:12.884 Message: lib/kvargs: Defining dependency "kvargs"
00:02:12.884 Message: lib/telemetry: Defining dependency "telemetry"
00:02:12.884 Library rt found: YES
00:02:12.884 Checking for function "getentropy" : NO
00:02:12.884 Message: lib/eal: Defining dependency "eal"
00:02:12.884 Message: lib/ring: Defining dependency "ring"
00:02:12.884 Message: lib/rcu: Defining dependency "rcu"
00:02:12.884 Message: lib/mempool: Defining dependency "mempool"
00:02:12.884 Message: lib/mbuf: Defining dependency "mbuf"
00:02:12.884 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:12.884 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:12.884 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:12.884 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:12.884 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:12.884 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:12.884 Compiler for C supports arguments -mpclmul: YES
00:02:12.884 Compiler for C supports arguments -maes: YES
00:02:12.884 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:12.884 Compiler for C supports arguments -mavx512bw: YES
00:02:12.884 Compiler for C supports arguments -mavx512dq: YES
00:02:12.884 Compiler for C supports arguments -mavx512vl: YES
00:02:12.884 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:12.884 Compiler for C supports arguments -mavx2: YES
00:02:12.884 Compiler for C supports arguments -mavx: YES
00:02:12.884 Message: lib/net: Defining dependency "net"
00:02:12.884 Message: lib/meter: Defining dependency "meter"
00:02:12.884 Message: lib/ethdev: Defining dependency "ethdev"
00:02:12.884 Message: lib/pci: Defining dependency "pci"
00:02:12.884 Message: lib/cmdline: Defining dependency "cmdline"
00:02:12.884 Message: lib/hash: Defining dependency "hash"
00:02:12.884 Message: lib/timer: Defining dependency "timer"
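
Note: each "Compiler for C supports arguments ...: YES" line is Meson probing the compiler by test-compiling a tiny translation unit with the flag added; the "Fetching value of define" lines query predefined macros the same way. A hand-written bash approximation of one flag probe (illustrative only; not the literal command Meson runs):

    # Does cc accept -mavx512f? Compile an empty C file with -Werror and check.
    if cc -Werror -mavx512f -x c -c -o /dev/null /dev/null 2>/dev/null; then
        echo 'Compiler for C supports arguments -mavx512f: YES'
    else
        echo 'Compiler for C supports arguments -mavx512f: NO'
    fi
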
dependency "timer" 00:02:12.884 Message: lib/compressdev: Defining dependency "compressdev" 00:02:12.884 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:12.884 Message: lib/dmadev: Defining dependency "dmadev" 00:02:12.884 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:12.884 Message: lib/power: Defining dependency "power" 00:02:12.884 Message: lib/reorder: Defining dependency "reorder" 00:02:12.884 Message: lib/security: Defining dependency "security" 00:02:12.884 Has header "linux/userfaultfd.h" : YES 00:02:12.884 Has header "linux/vduse.h" : YES 00:02:12.884 Message: lib/vhost: Defining dependency "vhost" 00:02:12.884 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:12.884 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:12.884 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:12.884 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:12.884 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:12.884 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:12.884 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:12.884 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:12.884 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:12.884 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:12.884 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:12.884 Configuring doxy-api-html.conf using configuration 00:02:12.884 Configuring doxy-api-man.conf using configuration 00:02:12.884 Program mandb found: YES (/usr/bin/mandb) 00:02:12.884 Program sphinx-build found: NO 00:02:12.884 Configuring rte_build_config.h using configuration 00:02:12.884 Message: 00:02:12.884 ================= 00:02:12.884 Applications Enabled 00:02:12.884 ================= 00:02:12.884 00:02:12.884 apps: 00:02:12.884 00:02:12.884 00:02:12.884 Message: 00:02:12.884 ================= 00:02:12.884 Libraries Enabled 00:02:12.884 ================= 00:02:12.884 00:02:12.884 libs: 00:02:12.884 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:12.884 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:12.884 cryptodev, dmadev, power, reorder, security, vhost, 00:02:12.884 00:02:12.884 Message: 00:02:12.884 =============== 00:02:12.884 Drivers Enabled 00:02:12.884 =============== 00:02:12.884 00:02:12.884 common: 00:02:12.884 00:02:12.884 bus: 00:02:12.884 pci, vdev, 00:02:12.884 mempool: 00:02:12.884 ring, 00:02:12.884 dma: 00:02:12.884 00:02:12.884 net: 00:02:12.884 00:02:12.884 crypto: 00:02:12.884 00:02:12.884 compress: 00:02:12.884 00:02:12.884 vdpa: 00:02:12.884 00:02:12.884 00:02:12.884 Message: 00:02:12.884 ================= 00:02:12.884 Content Skipped 00:02:12.884 ================= 00:02:12.884 00:02:12.884 apps: 00:02:12.884 dumpcap: explicitly disabled via build config 00:02:12.884 graph: explicitly disabled via build config 00:02:12.884 pdump: explicitly disabled via build config 00:02:12.884 proc-info: explicitly disabled via build config 00:02:12.884 test-acl: explicitly disabled via build config 00:02:12.884 test-bbdev: explicitly disabled via build config 00:02:12.884 test-cmdline: explicitly disabled via build config 00:02:12.884 test-compress-perf: explicitly disabled via build config 00:02:12.884 test-crypto-perf: explicitly disabled via build config 00:02:12.884 test-dma-perf: explicitly disabled via build config 00:02:12.884 
test-eventdev: explicitly disabled via build config 00:02:12.884 test-fib: explicitly disabled via build config 00:02:12.884 test-flow-perf: explicitly disabled via build config 00:02:12.884 test-gpudev: explicitly disabled via build config 00:02:12.884 test-mldev: explicitly disabled via build config 00:02:12.884 test-pipeline: explicitly disabled via build config 00:02:12.884 test-pmd: explicitly disabled via build config 00:02:12.884 test-regex: explicitly disabled via build config 00:02:12.884 test-sad: explicitly disabled via build config 00:02:12.885 test-security-perf: explicitly disabled via build config 00:02:12.885 00:02:12.885 libs: 00:02:12.885 argparse: explicitly disabled via build config 00:02:12.885 metrics: explicitly disabled via build config 00:02:12.885 acl: explicitly disabled via build config 00:02:12.885 bbdev: explicitly disabled via build config 00:02:12.885 bitratestats: explicitly disabled via build config 00:02:12.885 bpf: explicitly disabled via build config 00:02:12.885 cfgfile: explicitly disabled via build config 00:02:12.885 distributor: explicitly disabled via build config 00:02:12.885 efd: explicitly disabled via build config 00:02:12.885 eventdev: explicitly disabled via build config 00:02:12.885 dispatcher: explicitly disabled via build config 00:02:12.885 gpudev: explicitly disabled via build config 00:02:12.885 gro: explicitly disabled via build config 00:02:12.885 gso: explicitly disabled via build config 00:02:12.885 ip_frag: explicitly disabled via build config 00:02:12.885 jobstats: explicitly disabled via build config 00:02:12.885 latencystats: explicitly disabled via build config 00:02:12.885 lpm: explicitly disabled via build config 00:02:12.885 member: explicitly disabled via build config 00:02:12.885 pcapng: explicitly disabled via build config 00:02:12.885 rawdev: explicitly disabled via build config 00:02:12.885 regexdev: explicitly disabled via build config 00:02:12.885 mldev: explicitly disabled via build config 00:02:12.885 rib: explicitly disabled via build config 00:02:12.885 sched: explicitly disabled via build config 00:02:12.885 stack: explicitly disabled via build config 00:02:12.885 ipsec: explicitly disabled via build config 00:02:12.885 pdcp: explicitly disabled via build config 00:02:12.885 fib: explicitly disabled via build config 00:02:12.885 port: explicitly disabled via build config 00:02:12.885 pdump: explicitly disabled via build config 00:02:12.885 table: explicitly disabled via build config 00:02:12.885 pipeline: explicitly disabled via build config 00:02:12.885 graph: explicitly disabled via build config 00:02:12.885 node: explicitly disabled via build config 00:02:12.885 00:02:12.885 drivers: 00:02:12.885 common/cpt: not in enabled drivers build config 00:02:12.885 common/dpaax: not in enabled drivers build config 00:02:12.885 common/iavf: not in enabled drivers build config 00:02:12.885 common/idpf: not in enabled drivers build config 00:02:12.885 common/ionic: not in enabled drivers build config 00:02:12.885 common/mvep: not in enabled drivers build config 00:02:12.885 common/octeontx: not in enabled drivers build config 00:02:12.885 bus/auxiliary: not in enabled drivers build config 00:02:12.885 bus/cdx: not in enabled drivers build config 00:02:12.885 bus/dpaa: not in enabled drivers build config 00:02:12.885 bus/fslmc: not in enabled drivers build config 00:02:12.885 bus/ifpga: not in enabled drivers build config 00:02:12.885 bus/platform: not in enabled drivers build config 00:02:12.885 bus/uacce: not in enabled 
drivers build config 00:02:12.885 bus/vmbus: not in enabled drivers build config 00:02:12.885 common/cnxk: not in enabled drivers build config 00:02:12.885 common/mlx5: not in enabled drivers build config 00:02:12.885 common/nfp: not in enabled drivers build config 00:02:12.885 common/nitrox: not in enabled drivers build config 00:02:12.885 common/qat: not in enabled drivers build config 00:02:12.885 common/sfc_efx: not in enabled drivers build config 00:02:12.885 mempool/bucket: not in enabled drivers build config 00:02:12.885 mempool/cnxk: not in enabled drivers build config 00:02:12.885 mempool/dpaa: not in enabled drivers build config 00:02:12.885 mempool/dpaa2: not in enabled drivers build config 00:02:12.885 mempool/octeontx: not in enabled drivers build config 00:02:12.885 mempool/stack: not in enabled drivers build config 00:02:12.885 dma/cnxk: not in enabled drivers build config 00:02:12.885 dma/dpaa: not in enabled drivers build config 00:02:12.885 dma/dpaa2: not in enabled drivers build config 00:02:12.885 dma/hisilicon: not in enabled drivers build config 00:02:12.885 dma/idxd: not in enabled drivers build config 00:02:12.885 dma/ioat: not in enabled drivers build config 00:02:12.885 dma/skeleton: not in enabled drivers build config 00:02:12.885 net/af_packet: not in enabled drivers build config 00:02:12.885 net/af_xdp: not in enabled drivers build config 00:02:12.885 net/ark: not in enabled drivers build config 00:02:12.885 net/atlantic: not in enabled drivers build config 00:02:12.885 net/avp: not in enabled drivers build config 00:02:12.885 net/axgbe: not in enabled drivers build config 00:02:12.885 net/bnx2x: not in enabled drivers build config 00:02:12.885 net/bnxt: not in enabled drivers build config 00:02:12.885 net/bonding: not in enabled drivers build config 00:02:12.885 net/cnxk: not in enabled drivers build config 00:02:12.885 net/cpfl: not in enabled drivers build config 00:02:12.885 net/cxgbe: not in enabled drivers build config 00:02:12.885 net/dpaa: not in enabled drivers build config 00:02:12.885 net/dpaa2: not in enabled drivers build config 00:02:12.885 net/e1000: not in enabled drivers build config 00:02:12.885 net/ena: not in enabled drivers build config 00:02:12.885 net/enetc: not in enabled drivers build config 00:02:12.885 net/enetfec: not in enabled drivers build config 00:02:12.885 net/enic: not in enabled drivers build config 00:02:12.885 net/failsafe: not in enabled drivers build config 00:02:12.885 net/fm10k: not in enabled drivers build config 00:02:12.885 net/gve: not in enabled drivers build config 00:02:12.885 net/hinic: not in enabled drivers build config 00:02:12.885 net/hns3: not in enabled drivers build config 00:02:12.885 net/i40e: not in enabled drivers build config 00:02:12.885 net/iavf: not in enabled drivers build config 00:02:12.885 net/ice: not in enabled drivers build config 00:02:12.885 net/idpf: not in enabled drivers build config 00:02:12.885 net/igc: not in enabled drivers build config 00:02:12.885 net/ionic: not in enabled drivers build config 00:02:12.885 net/ipn3ke: not in enabled drivers build config 00:02:12.885 net/ixgbe: not in enabled drivers build config 00:02:12.885 net/mana: not in enabled drivers build config 00:02:12.885 net/memif: not in enabled drivers build config 00:02:12.885 net/mlx4: not in enabled drivers build config 00:02:12.885 net/mlx5: not in enabled drivers build config 00:02:12.885 net/mvneta: not in enabled drivers build config 00:02:12.885 net/mvpp2: not in enabled drivers build config 00:02:12.885 
net/netvsc: not in enabled drivers build config 00:02:12.885 net/nfb: not in enabled drivers build config 00:02:12.885 net/nfp: not in enabled drivers build config 00:02:12.885 net/ngbe: not in enabled drivers build config 00:02:12.885 net/null: not in enabled drivers build config 00:02:12.885 net/octeontx: not in enabled drivers build config 00:02:12.885 net/octeon_ep: not in enabled drivers build config 00:02:12.885 net/pcap: not in enabled drivers build config 00:02:12.885 net/pfe: not in enabled drivers build config 00:02:12.885 net/qede: not in enabled drivers build config 00:02:12.885 net/ring: not in enabled drivers build config 00:02:12.885 net/sfc: not in enabled drivers build config 00:02:12.885 net/softnic: not in enabled drivers build config 00:02:12.885 net/tap: not in enabled drivers build config 00:02:12.885 net/thunderx: not in enabled drivers build config 00:02:12.885 net/txgbe: not in enabled drivers build config 00:02:12.885 net/vdev_netvsc: not in enabled drivers build config 00:02:12.885 net/vhost: not in enabled drivers build config 00:02:12.885 net/virtio: not in enabled drivers build config 00:02:12.885 net/vmxnet3: not in enabled drivers build config 00:02:12.885 raw/*: missing internal dependency, "rawdev" 00:02:12.885 crypto/armv8: not in enabled drivers build config 00:02:12.885 crypto/bcmfs: not in enabled drivers build config 00:02:12.885 crypto/caam_jr: not in enabled drivers build config 00:02:12.885 crypto/ccp: not in enabled drivers build config 00:02:12.885 crypto/cnxk: not in enabled drivers build config 00:02:12.885 crypto/dpaa_sec: not in enabled drivers build config 00:02:12.885 crypto/dpaa2_sec: not in enabled drivers build config 00:02:12.885 crypto/ipsec_mb: not in enabled drivers build config 00:02:12.885 crypto/mlx5: not in enabled drivers build config 00:02:12.885 crypto/mvsam: not in enabled drivers build config 00:02:12.885 crypto/nitrox: not in enabled drivers build config 00:02:12.885 crypto/null: not in enabled drivers build config 00:02:12.885 crypto/octeontx: not in enabled drivers build config 00:02:12.885 crypto/openssl: not in enabled drivers build config 00:02:12.885 crypto/scheduler: not in enabled drivers build config 00:02:12.885 crypto/uadk: not in enabled drivers build config 00:02:12.885 crypto/virtio: not in enabled drivers build config 00:02:12.885 compress/isal: not in enabled drivers build config 00:02:12.885 compress/mlx5: not in enabled drivers build config 00:02:12.885 compress/nitrox: not in enabled drivers build config 00:02:12.885 compress/octeontx: not in enabled drivers build config 00:02:12.885 compress/zlib: not in enabled drivers build config 00:02:12.885 regex/*: missing internal dependency, "regexdev" 00:02:12.885 ml/*: missing internal dependency, "mldev" 00:02:12.885 vdpa/ifc: not in enabled drivers build config 00:02:12.885 vdpa/mlx5: not in enabled drivers build config 00:02:12.885 vdpa/nfp: not in enabled drivers build config 00:02:12.885 vdpa/sfc: not in enabled drivers build config 00:02:12.885 event/*: missing internal dependency, "eventdev" 00:02:12.885 baseband/*: missing internal dependency, "bbdev" 00:02:12.885 gpu/*: missing internal dependency, "gpudev" 00:02:12.885 00:02:12.885 00:02:12.885 Build targets in project: 84 00:02:12.885 00:02:12.885 DPDK 24.03.0 00:02:12.885 00:02:12.885 User defined options 00:02:12.885 buildtype : debug 00:02:12.885 default_library : shared 00:02:12.885 libdir : lib 00:02:12.885 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:12.885 b_sanitize : address 
00:02:12.885 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:12.885 c_link_args : 00:02:12.885 cpu_instruction_set: native 00:02:12.886 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:12.886 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:12.886 enable_docs : false 00:02:12.886 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:12.886 enable_kmods : false 00:02:12.886 max_lcores : 128 00:02:12.886 tests : false 00:02:12.886 00:02:12.886 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:13.144 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:13.144 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:13.144 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:13.144 [3/267] Linking static target lib/librte_kvargs.a 00:02:13.144 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:13.144 [5/267] Linking static target lib/librte_log.a 00:02:13.144 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:13.403 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:13.403 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:13.403 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:13.403 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:13.403 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:13.403 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.403 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:13.403 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:13.403 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:13.403 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:13.661 [17/267] Linking static target lib/librte_telemetry.a 00:02:13.661 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:13.661 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:13.661 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:13.920 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:13.920 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:13.920 [23/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.920 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:13.920 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:13.920 [26/267] Linking target lib/librte_log.so.24.1 00:02:14.178 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 
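The "User defined options" summary above maps one-to-one onto meson command-line flags, so this embedded DPDK configure step can be reproduced standalone. A minimal sketch, assuming DPDK 24.03's stock meson option names and copying the option values verbatim from the summary above (the CI drives this through SPDK's own build scripts rather than invoking meson by hand):

  meson setup build-tmp \
      --buildtype=debug --default-library=shared --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
      -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
      -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false

The ninja run that follows is then just the build step over the directory meson generated.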
00:02:13.144 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:13.144 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:13.144 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:13.144 [3/267] Linking static target lib/librte_kvargs.a
00:02:13.144 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:13.144 [5/267] Linking static target lib/librte_log.a
00:02:13.144 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:13.403 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:13.403 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:13.403 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:13.403 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:13.403 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:13.403 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.403 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:13.403 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:13.403 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:13.403 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:13.661 [17/267] Linking static target lib/librte_telemetry.a
00:02:13.661 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:13.661 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:13.661 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:13.920 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:13.920 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:13.920 [23/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.920 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:13.920 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:13.920 [26/267] Linking target lib/librte_log.so.24.1
00:02:14.178 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:14.178 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:14.178 [29/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:14.178 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:14.178 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:14.178 [32/267] Linking target lib/librte_kvargs.so.24.1
00:02:14.178 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.178 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:14.178 [35/267] Linking target lib/librte_telemetry.so.24.1
00:02:14.178 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:14.437 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:14.437 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:14.437 [39/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:14.437 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:14.437 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:14.437 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:14.437 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:14.437 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:14.437 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:14.707 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:14.707 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:14.707 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:14.707 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:14.707 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:14.707 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:15.020 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:15.020 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:15.020 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:15.020 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:15.020 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:15.020 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:15.020 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:15.279 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:15.279 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:15.279 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:15.279 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:15.279 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:15.279 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:15.279 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:15.279 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:15.279 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:15.537 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:15.537 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:15.537 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:15.537 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:15.537 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:15.537 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:15.537 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:15.537 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:15.795 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:15.795 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:15.795 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:15.795 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:15.795 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:15.795 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:16.052 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:16.052 [83/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:16.052 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:16.052 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:16.052 [86/267] Linking static target lib/librte_ring.a
00:02:16.052 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:16.310 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:16.310 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:16.310 [90/267] Linking static target lib/librte_eal.a
00:02:16.310 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:16.310 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:16.310 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:16.310 [94/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:16.310 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:16.310 [96/267] Linking static target lib/librte_rcu.a
00:02:16.310 [97/267] Linking static target lib/librte_mempool.a
00:02:16.568 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.568 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:16.568 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:16.826 [101/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:16.826 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:16.826 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.826 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:16.826 [105/267] Linking static target lib/librte_mbuf.a
00:02:16.826 [106/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:16.826 [107/267] Linking static target lib/librte_meter.a
00:02:16.826 [108/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:16.826 [109/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:16.826 [110/267] Linking static target lib/librte_net.a
00:02:17.083 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:17.083 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:17.083 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:17.083 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.083 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.341 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.341 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:17.341 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:17.598 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:17.598 [120/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.598 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:17.598 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:17.855 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:17.855 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:17.855 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:17.855 [126/267] Linking static target lib/librte_pci.a
00:02:17.855 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:17.855 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:17.855 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:18.112 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:18.112 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:18.112 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:18.112 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:18.112 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:18.112 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:18.112 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:18.112 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:18.112 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:18.112 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:18.112 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:18.112 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:18.112 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:18.370 [143/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.370 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:18.370 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:18.370 [146/267] Linking static target lib/librte_cmdline.a
00:02:18.628 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:18.628 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:18.628 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:18.628 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:18.628 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:18.628 [152/267] Linking static target lib/librte_timer.a
00:02:18.885 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:18.885 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:18.885 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:18.885 [156/267] Linking static target lib/librte_ethdev.a
00:02:18.885 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:18.885 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:18.885 [159/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:18.885 [160/267] Linking static target lib/librte_hash.a
00:02:18.885 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:19.142 [162/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:19.142 [163/267] Linking static target lib/librte_compressdev.a
00:02:19.142 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:19.142 [165/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.400 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:19.400 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:19.400 [168/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:19.400 [169/267] Linking static target lib/librte_dmadev.a
00:02:19.400 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:19.400 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:19.658 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.658 [173/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:19.658 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:19.916 [175/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.916 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:19.916 [177/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.916 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:19.916 [179/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:19.916 [180/267] Linking static target lib/librte_cryptodev.a
00:02:19.916 [181/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:19.916 [182/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.916 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:19.916 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:20.173 [185/267] Linking static target lib/librte_power.a
00:02:20.173 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:20.431 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:20.431 [188/267] Linking static target lib/librte_reorder.a
00:02:20.431 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:20.431 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:20.431 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:20.431 [192/267] Linking static target lib/librte_security.a
00:02:20.689 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.689 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:20.946 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.946 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.947 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:20.947 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:21.204 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:21.204 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:21.204 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:21.461 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:21.461 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:21.461 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:21.461 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:21.461 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:21.461 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:21.461 [208/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:21.461 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:21.719 [210/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:21.719 [211/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.719 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:21.719 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:21.719 [214/267] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:21.719 [215/267] Linking static target drivers/librte_bus_vdev.a
00:02:21.719 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:21.719 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:21.719 [218/267] Linking static target drivers/librte_bus_pci.a
00:02:21.977 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:21.977 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:21.977 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:21.977 [222/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.977 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:21.978 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:21.978 [225/267] Linking static target drivers/librte_mempool_ring.a
00:02:22.236 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.493 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:23.427 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.427 [229/267] Linking target lib/librte_eal.so.24.1
00:02:23.685 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:23.685 [231/267] Linking target lib/librte_pci.so.24.1
00:02:23.685 [232/267] Linking target lib/librte_timer.so.24.1
00:02:23.685 [233/267] Linking target lib/librte_meter.so.24.1
00:02:23.685 [234/267] Linking target lib/librte_dmadev.so.24.1
00:02:23.685 [235/267] Linking target lib/librte_ring.so.24.1
00:02:23.685 [236/267] Linking target drivers/librte_bus_vdev.so.24.1
00:02:23.685 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:23.685 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:23.685 [239/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:23.685 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:23.685 [241/267] Linking target drivers/librte_bus_pci.so.24.1
00:02:23.943 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:23.943 [243/267] Linking target lib/librte_rcu.so.24.1
00:02:23.943 [244/267] Linking target lib/librte_mempool.so.24.1
00:02:23.943 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:23.943 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:23.943 [247/267] Linking target drivers/librte_mempool_ring.so.24.1
00:02:23.943 [248/267] Linking target lib/librte_mbuf.so.24.1
00:02:24.202 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:24.202 [250/267] Linking target lib/librte_compressdev.so.24.1
00:02:24.202 [251/267] Linking target lib/librte_reorder.so.24.1
00:02:24.202 [252/267] Linking target lib/librte_net.so.24.1
00:02:24.202 [253/267] Linking target lib/librte_cryptodev.so.24.1
00:02:24.202 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:24.202 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:24.202 [256/267] Linking target lib/librte_hash.so.24.1
00:02:24.202 [257/267] Linking target lib/librte_cmdline.so.24.1
00:02:24.202 [258/267] Linking target lib/librte_security.so.24.1
00:02:24.459 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.459 [260/267] Linking target lib/librte_ethdev.so.24.1
00:02:24.459 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:24.459 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:24.459 [263/267] Linking target lib/librte_power.so.24.1
00:02:25.393 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:25.393 [265/267] Linking static target lib/librte_vhost.a
00:02:26.765 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.765 [267/267] Linking target lib/librte_vhost.so.24.1
00:02:26.766 INFO: autodetecting backend as ninja
00:02:26.766 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
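Everything past this point is SPDK's own Makefile output (the CC/CXX/LIB/SO/SYMLINK lines), emitted after the bundled DPDK build above finished. A rough sketch of reproducing this stage by hand, assuming a stock SPDK checkout whose dpdk submodule was configured as above (illustrative commands, not the exact CI invocation):

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug   # SPDK's configure script; wires in the dpdk/build artifacts
  make -j10                    # produces the CC/LIB/SO/SYMLINK lines that follow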
00:02:41.644 CC lib/log/log.o
00:02:41.644 CC lib/log/log_flags.o
00:02:41.644 CC lib/log/log_deprecated.o
00:02:41.644 CC lib/ut_mock/mock.o
00:02:41.644 CC lib/ut/ut.o
00:02:41.644 LIB libspdk_ut.a
00:02:41.644 LIB libspdk_log.a
00:02:41.644 SO libspdk_ut.so.2.0
00:02:41.644 LIB libspdk_ut_mock.a
00:02:41.644 SO libspdk_ut_mock.so.6.0
00:02:41.644 SO libspdk_log.so.7.1
00:02:41.644 SYMLINK libspdk_ut.so
00:02:41.644 SYMLINK libspdk_ut_mock.so
00:02:41.644 SYMLINK libspdk_log.so
00:02:41.644 CXX lib/trace_parser/trace.o
00:02:41.644 CC lib/util/cpuset.o
00:02:41.644 CC lib/util/bit_array.o
00:02:41.644 CC lib/util/base64.o
00:02:41.644 CC lib/util/crc16.o
00:02:41.644 CC lib/util/crc32.o
00:02:41.644 CC lib/util/crc32c.o
00:02:41.644 CC lib/dma/dma.o
00:02:41.644 CC lib/ioat/ioat.o
00:02:41.644 CC lib/vfio_user/host/vfio_user_pci.o
00:02:41.644 CC lib/util/crc32_ieee.o
00:02:41.644 CC lib/util/crc64.o
00:02:41.644 CC lib/util/dif.o
00:02:41.644 CC lib/util/fd.o
00:02:41.644 CC lib/util/fd_group.o
00:02:41.644 LIB libspdk_dma.a
00:02:41.644 CC lib/vfio_user/host/vfio_user.o
00:02:41.644 SO libspdk_dma.so.5.0
00:02:41.644 LIB libspdk_ioat.a
00:02:41.644 CC lib/util/file.o
00:02:41.644 CC lib/util/hexlify.o
00:02:41.644 CC lib/util/iov.o
00:02:41.644 SYMLINK libspdk_dma.so
00:02:41.644 SO libspdk_ioat.so.7.0
00:02:41.644 CC lib/util/math.o
00:02:41.644 SYMLINK libspdk_ioat.so
00:02:41.644 CC lib/util/net.o
00:02:41.644 CC lib/util/pipe.o
00:02:41.644 CC lib/util/strerror_tls.o
00:02:41.644 LIB libspdk_vfio_user.a
00:02:41.644 CC lib/util/string.o
00:02:41.644 CC lib/util/uuid.o
00:02:41.644 SO libspdk_vfio_user.so.5.0
00:02:41.644 CC lib/util/xor.o
00:02:41.644 CC lib/util/zipf.o
00:02:41.644 CC lib/util/md5.o
00:02:41.644 SYMLINK libspdk_vfio_user.so
00:02:41.644 LIB libspdk_trace_parser.a
00:02:41.644 SO libspdk_trace_parser.so.6.0
00:02:41.644 LIB libspdk_util.a
00:02:41.644 SO libspdk_util.so.10.1
00:02:41.904 SYMLINK libspdk_trace_parser.so
00:02:41.904 SYMLINK libspdk_util.so
00:02:41.904 CC lib/rdma_utils/rdma_utils.o
00:02:41.904 CC lib/idxd/idxd_user.o
00:02:41.904 CC lib/idxd/idxd_kernel.o
00:02:41.904 CC lib/idxd/idxd.o
00:02:41.904 CC lib/json/json_write.o
00:02:41.904 CC lib/conf/conf.o
00:02:41.904 CC lib/json/json_util.o
00:02:41.904 CC lib/env_dpdk/env.o
00:02:41.904 CC lib/json/json_parse.o
00:02:41.904 CC lib/vmd/vmd.o
00:02:42.165 CC lib/vmd/led.o
00:02:42.165 CC lib/env_dpdk/memory.o
00:02:42.165 LIB libspdk_rdma_utils.a
00:02:42.165 LIB libspdk_conf.a
00:02:42.165 CC lib/env_dpdk/pci.o
00:02:42.165 CC lib/env_dpdk/init.o
00:02:42.165 SO libspdk_rdma_utils.so.1.0
00:02:42.165 LIB libspdk_json.a
00:02:42.165 SO libspdk_conf.so.6.0
00:02:42.165 SO libspdk_json.so.6.0
00:02:42.165 SYMLINK libspdk_rdma_utils.so
00:02:42.165 SYMLINK libspdk_conf.so
00:02:42.165 CC lib/env_dpdk/threads.o
00:02:42.165 CC lib/env_dpdk/pci_ioat.o
00:02:42.165 CC lib/env_dpdk/pci_virtio.o
00:02:42.423 SYMLINK libspdk_json.so
00:02:42.423 CC lib/env_dpdk/pci_vmd.o
00:02:42.423 CC lib/env_dpdk/pci_idxd.o
00:02:42.423 CC lib/env_dpdk/pci_event.o
00:02:42.423 CC lib/rdma_provider/common.o
00:02:42.423 CC lib/env_dpdk/sigbus_handler.o
00:02:42.423 CC lib/env_dpdk/pci_dpdk.o
00:02:42.423 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:42.681 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:42.681 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:42.681 LIB libspdk_idxd.a
00:02:42.681 SO libspdk_idxd.so.12.1
00:02:42.681 LIB libspdk_vmd.a
00:02:42.681 SO libspdk_vmd.so.6.0
00:02:42.681 SYMLINK libspdk_idxd.so
00:02:42.681 SYMLINK libspdk_vmd.so
00:02:42.681 LIB libspdk_rdma_provider.a
00:02:42.681 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:42.681 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:42.681 CC lib/jsonrpc/jsonrpc_server.o
00:02:42.681 CC lib/jsonrpc/jsonrpc_client.o
00:02:42.681 SO libspdk_rdma_provider.so.7.0
00:02:42.939 SYMLINK libspdk_rdma_provider.so
00:02:42.939 LIB libspdk_jsonrpc.a
00:02:42.939 SO libspdk_jsonrpc.so.6.0
00:02:43.197 SYMLINK libspdk_jsonrpc.so
00:02:43.197 LIB libspdk_env_dpdk.a
00:02:43.197 SO libspdk_env_dpdk.so.15.1
00:02:43.197 SYMLINK libspdk_env_dpdk.so
00:02:43.197 CC lib/rpc/rpc.o
00:02:43.455 LIB libspdk_rpc.a
00:02:43.455 SO libspdk_rpc.so.6.0
00:02:43.714 SYMLINK libspdk_rpc.so
00:02:43.714 CC lib/notify/notify_rpc.o
00:02:43.714 CC lib/notify/notify.o
00:02:43.714 CC lib/keyring/keyring.o
00:02:43.714 CC lib/keyring/keyring_rpc.o
00:02:43.714 CC lib/trace/trace.o
00:02:43.714 CC lib/trace/trace_flags.o
00:02:43.714 CC lib/trace/trace_rpc.o
00:02:43.972 LIB libspdk_notify.a
00:02:43.972 SO libspdk_notify.so.6.0
00:02:43.972 SYMLINK libspdk_notify.so
00:02:43.972 LIB libspdk_keyring.a
00:02:43.972 LIB libspdk_trace.a
00:02:43.972 SO libspdk_keyring.so.2.0
00:02:43.972 SO libspdk_trace.so.11.0
00:02:43.972 SYMLINK libspdk_keyring.so
00:02:43.972 SYMLINK libspdk_trace.so
00:02:44.231 CC lib/thread/thread.o
00:02:44.231 CC lib/thread/iobuf.o
00:02:44.231 CC lib/sock/sock.o
00:02:44.231 CC lib/sock/sock_rpc.o
00:02:44.797 LIB libspdk_sock.a
00:02:44.797 SO libspdk_sock.so.10.0
00:02:44.797 SYMLINK libspdk_sock.so
00:02:45.055 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:45.055 CC lib/nvme/nvme_ns_cmd.o
00:02:45.055 CC lib/nvme/nvme_ctrlr.o
00:02:45.055 CC lib/nvme/nvme_fabric.o
00:02:45.055 CC lib/nvme/nvme_ns.o
00:02:45.055 CC lib/nvme/nvme.o
00:02:45.055 CC lib/nvme/nvme_qpair.o
00:02:45.055 CC lib/nvme/nvme_pcie_common.o
00:02:45.055 CC lib/nvme/nvme_pcie.o
00:02:45.622 CC lib/nvme/nvme_quirks.o
00:02:45.622 CC lib/nvme/nvme_transport.o
00:02:45.622 CC lib/nvme/nvme_discovery.o
00:02:45.622 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:45.880 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:45.880 CC lib/nvme/nvme_tcp.o
00:02:45.880 CC lib/nvme/nvme_opal.o
00:02:45.880 LIB libspdk_thread.a
00:02:45.880 CC lib/nvme/nvme_io_msg.o
00:02:45.880 SO libspdk_thread.so.11.0
00:02:45.880 CC lib/nvme/nvme_poll_group.o
00:02:45.880 SYMLINK libspdk_thread.so
00:02:45.880 CC lib/nvme/nvme_zns.o
00:02:46.137 CC lib/nvme/nvme_stubs.o
00:02:46.137 CC lib/nvme/nvme_auth.o
00:02:46.137 CC lib/nvme/nvme_cuse.o
00:02:46.395 CC lib/nvme/nvme_rdma.o
00:02:46.653 CC lib/accel/accel.o
00:02:46.653 CC lib/blob/blobstore.o
00:02:46.653 CC lib/init/json_config.o
00:02:46.653 CC lib/virtio/virtio.o
00:02:46.653 CC lib/fsdev/fsdev.o
00:02:46.911 CC lib/init/subsystem.o
00:02:46.911 CC lib/virtio/virtio_vhost_user.o
00:02:46.911 CC lib/init/subsystem_rpc.o
00:02:46.911 CC lib/init/rpc.o
00:02:47.169 CC lib/fsdev/fsdev_io.o
00:02:47.169 CC lib/blob/request.o
00:02:47.169 CC lib/blob/zeroes.o
00:02:47.169 LIB libspdk_init.a
00:02:47.169 CC lib/blob/blob_bs_dev.o
00:02:47.169 SO libspdk_init.so.6.0
00:02:47.169 SYMLINK libspdk_init.so
00:02:47.169 CC lib/accel/accel_rpc.o
00:02:47.169 CC lib/accel/accel_sw.o
00:02:47.169 CC lib/virtio/virtio_vfio_user.o
00:02:47.169 CC lib/virtio/virtio_pci.o
00:02:47.169 CC lib/fsdev/fsdev_rpc.o
00:02:47.427 LIB libspdk_fsdev.a
00:02:47.427 SO libspdk_fsdev.so.2.0
00:02:47.427 CC lib/event/reactor.o
00:02:47.427 CC lib/event/app.o
00:02:47.427 CC lib/event/log_rpc.o
00:02:47.427 CC lib/event/scheduler_static.o
00:02:47.427 CC lib/event/app_rpc.o
00:02:47.427 SYMLINK libspdk_fsdev.so
00:02:47.427 LIB libspdk_virtio.a
00:02:47.685 LIB libspdk_accel.a
00:02:47.685 SO libspdk_virtio.so.7.0
00:02:47.685 SO libspdk_accel.so.16.0
00:02:47.685 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:02:47.685 SYMLINK libspdk_virtio.so
00:02:47.685 SYMLINK libspdk_accel.so
00:02:47.685 LIB libspdk_nvme.a
00:02:47.946 LIB libspdk_event.a
00:02:47.946 CC lib/bdev/bdev.o
00:02:47.946 CC lib/bdev/bdev_rpc.o
00:02:47.946 CC lib/bdev/bdev_zone.o
00:02:47.946 CC lib/bdev/part.o
00:02:47.946 CC lib/bdev/scsi_nvme.o
00:02:47.946 SO libspdk_event.so.14.0
00:02:47.946 SYMLINK libspdk_event.so
00:02:47.946 SO libspdk_nvme.so.15.0
00:02:48.211 SYMLINK libspdk_nvme.so
00:02:48.211 LIB libspdk_fuse_dispatcher.a
00:02:48.211 SO libspdk_fuse_dispatcher.so.1.0
00:02:48.211 SYMLINK libspdk_fuse_dispatcher.so
00:02:49.586 LIB libspdk_blob.a
00:02:49.586 SO libspdk_blob.so.12.0
00:02:49.586 SYMLINK libspdk_blob.so
00:02:49.843 CC lib/blobfs/blobfs.o
00:02:49.843 CC lib/blobfs/tree.o
00:02:49.843 CC lib/lvol/lvol.o
00:02:50.775 LIB libspdk_blobfs.a
00:02:50.775 SO libspdk_blobfs.so.11.0
00:02:50.775 SYMLINK libspdk_blobfs.so
00:02:50.775 LIB libspdk_lvol.a
00:02:50.775 SO libspdk_lvol.so.11.0
00:02:50.775 LIB libspdk_bdev.a
00:02:50.775 SO libspdk_bdev.so.17.0
00:02:50.775 SYMLINK libspdk_lvol.so
00:02:50.775 SYMLINK libspdk_bdev.so
00:02:51.036 CC lib/ftl/ftl_core.o
00:02:51.036 CC lib/ftl/ftl_init.o
00:02:51.036 CC lib/ftl/ftl_layout.o
00:02:51.036 CC lib/ftl/ftl_debug.o
00:02:51.036 CC lib/ftl/ftl_io.o
00:02:51.036 CC lib/ftl/ftl_sb.o
00:02:51.036 CC lib/scsi/dev.o
00:02:51.036 CC lib/nbd/nbd.o
00:02:51.036 CC lib/nvmf/ctrlr.o
00:02:51.036 CC lib/ublk/ublk.o
00:02:51.293 CC lib/ublk/ublk_rpc.o
00:02:51.293 CC lib/ftl/ftl_l2p.o
00:02:51.293 CC lib/scsi/lun.o
00:02:51.293 CC lib/scsi/port.o
00:02:51.293 CC lib/scsi/scsi.o
00:02:51.293 CC lib/ftl/ftl_l2p_flat.o
00:02:51.293 CC lib/scsi/scsi_bdev.o
00:02:51.293 CC lib/scsi/scsi_pr.o
00:02:51.293 CC lib/scsi/scsi_rpc.o
00:02:51.293 CC lib/ftl/ftl_nv_cache.o
00:02:51.293 CC lib/nbd/nbd_rpc.o
00:02:51.551 CC lib/ftl/ftl_band.o
00:02:51.551 CC lib/ftl/ftl_band_ops.o
00:02:51.551 CC lib/scsi/task.o
00:02:51.551 CC lib/nvmf/ctrlr_discovery.o
00:02:51.551 LIB libspdk_nbd.a
00:02:51.551 SO libspdk_nbd.so.7.0
00:02:51.551 SYMLINK libspdk_nbd.so
00:02:51.551 CC lib/nvmf/ctrlr_bdev.o
00:02:51.551 CC lib/nvmf/subsystem.o
00:02:51.809 CC lib/nvmf/nvmf.o
00:02:51.809 LIB libspdk_ublk.a
00:02:51.809 SO libspdk_ublk.so.3.0
00:02:51.809 CC lib/ftl/ftl_writer.o
00:02:51.809 CC lib/ftl/ftl_rq.o
00:02:51.809 SYMLINK libspdk_ublk.so
00:02:51.809 CC lib/ftl/ftl_reloc.o
00:02:51.809 LIB libspdk_scsi.a
00:02:51.809 SO libspdk_scsi.so.9.0
00:02:52.067 CC lib/nvmf/nvmf_rpc.o
00:02:52.067 SYMLINK libspdk_scsi.so
00:02:52.067 CC lib/ftl/ftl_l2p_cache.o
00:02:52.067 CC lib/ftl/ftl_p2l.o
00:02:52.067 CC lib/ftl/ftl_p2l_log.o
00:02:52.067 CC lib/ftl/mngt/ftl_mngt.o
00:02:52.325 CC lib/nvmf/transport.o
00:02:52.325 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:52.325 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:52.583 CC lib/iscsi/conn.o
00:02:52.583 CC lib/vhost/vhost.o
00:02:52.583 CC lib/vhost/vhost_rpc.o
00:02:52.583 CC lib/vhost/vhost_scsi.o
00:02:52.583 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:52.583 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:52.583 CC lib/nvmf/tcp.o
00:02:52.840 CC lib/nvmf/stubs.o
00:02:52.840 CC lib/iscsi/init_grp.o
00:02:52.840 CC lib/iscsi/iscsi.o
00:02:52.840 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:52.840 CC lib/iscsi/param.o
00:02:53.098 CC lib/iscsi/portal_grp.o
00:02:53.098 CC lib/iscsi/tgt_node.o
00:02:53.098 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:53.098 CC lib/vhost/vhost_blk.o
00:02:53.098 CC lib/vhost/rte_vhost_user.o
00:02:53.098 CC lib/iscsi/iscsi_subsystem.o
00:02:53.098 CC lib/iscsi/iscsi_rpc.o
00:02:53.356 CC lib/iscsi/task.o
00:02:53.356 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:53.356 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:53.356 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:53.356 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:53.614 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:53.614 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:53.614 CC lib/nvmf/mdns_server.o
00:02:53.614 CC lib/nvmf/rdma.o
00:02:53.614 CC lib/ftl/utils/ftl_conf.o
00:02:53.614 CC lib/ftl/utils/ftl_md.o
00:02:53.614 CC lib/ftl/utils/ftl_mempool.o
00:02:53.875 CC lib/ftl/utils/ftl_bitmap.o
00:02:53.875 CC lib/ftl/utils/ftl_property.o
00:02:53.875 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:53.875 CC lib/nvmf/auth.o
00:02:53.875 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:53.875 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:53.875 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:53.875 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:54.133 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:54.133 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:54.133 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:54.133 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:54.133 LIB libspdk_vhost.a
00:02:54.133 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:54.133 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:54.133 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:02:54.133 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:02:54.133 SO libspdk_vhost.so.8.0
00:02:54.133 LIB libspdk_iscsi.a
00:02:54.133 CC lib/ftl/base/ftl_base_dev.o
00:02:54.391 SYMLINK libspdk_vhost.so
00:02:54.391 CC lib/ftl/base/ftl_base_bdev.o
00:02:54.391 CC lib/ftl/ftl_trace.o
00:02:54.391 SO libspdk_iscsi.so.8.0
00:02:54.391 SYMLINK libspdk_iscsi.so
00:02:54.391 LIB libspdk_ftl.a
00:02:54.650 SO libspdk_ftl.so.9.0
00:02:54.921 SYMLINK libspdk_ftl.so
00:02:55.193 LIB libspdk_nvmf.a
00:02:55.452 SO libspdk_nvmf.so.20.0
00:02:55.709 SYMLINK libspdk_nvmf.so
00:02:55.968 CC module/env_dpdk/env_dpdk_rpc.o
00:02:55.968 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:55.968 CC module/accel/error/accel_error.o
00:02:55.968 CC module/sock/posix/posix.o
00:02:55.968 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:55.968 CC module/scheduler/gscheduler/gscheduler.o
00:02:55.968 CC module/accel/ioat/accel_ioat.o
00:02:55.968 CC module/keyring/file/keyring.o
00:02:55.968 CC module/blob/bdev/blob_bdev.o
00:02:55.968 CC module/fsdev/aio/fsdev_aio.o
00:02:55.968 LIB libspdk_env_dpdk_rpc.a
00:02:55.968 SO libspdk_env_dpdk_rpc.so.6.0
00:02:55.968 SYMLINK libspdk_env_dpdk_rpc.so
00:02:55.968 LIB libspdk_scheduler_gscheduler.a
00:02:55.968 LIB libspdk_scheduler_dpdk_governor.a
00:02:55.968 SO libspdk_scheduler_gscheduler.so.4.0
00:02:55.968 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:55.968 LIB libspdk_scheduler_dynamic.a
00:02:56.226 SYMLINK libspdk_scheduler_gscheduler.so
00:02:56.226 CC module/keyring/file/keyring_rpc.o
00:02:56.226 SO libspdk_scheduler_dynamic.so.4.0
00:02:56.226 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:56.226 CC module/accel/ioat/accel_ioat_rpc.o
00:02:56.226 SYMLINK libspdk_scheduler_dynamic.so
00:02:56.226 CC module/accel/error/accel_error_rpc.o
00:02:56.226 CC module/fsdev/aio/fsdev_aio_rpc.o
00:02:56.226 LIB libspdk_blob_bdev.a
00:02:56.226 SO libspdk_blob_bdev.so.12.0
00:02:56.226 LIB libspdk_keyring_file.a
00:02:56.226 CC module/accel/dsa/accel_dsa.o
00:02:56.226 LIB libspdk_accel_ioat.a
00:02:56.226 SO libspdk_keyring_file.so.2.0
00:02:56.226 SYMLINK libspdk_blob_bdev.so
00:02:56.226 CC module/accel/dsa/accel_dsa_rpc.o
00:02:56.226 LIB libspdk_accel_error.a
00:02:56.226 SO libspdk_accel_ioat.so.6.0
00:02:56.226 SO libspdk_accel_error.so.2.0
00:02:56.226 CC module/fsdev/aio/linux_aio_mgr.o
00:02:56.226 SYMLINK libspdk_keyring_file.so
00:02:56.226 CC module/accel/iaa/accel_iaa.o
00:02:56.226 SYMLINK libspdk_accel_ioat.so
00:02:56.226 SYMLINK libspdk_accel_error.so
00:02:56.226 CC module/accel/iaa/accel_iaa_rpc.o
00:02:56.226 CC module/keyring/linux/keyring.o
00:02:56.226 CC module/keyring/linux/keyring_rpc.o
00:02:56.485 LIB libspdk_accel_iaa.a
00:02:56.485 LIB libspdk_keyring_linux.a
00:02:56.485 SO libspdk_accel_iaa.so.3.0
00:02:56.485 SO libspdk_keyring_linux.so.1.0
00:02:56.485 CC module/bdev/delay/vbdev_delay.o
00:02:56.485 LIB libspdk_accel_dsa.a
00:02:56.485 LIB libspdk_fsdev_aio.a
00:02:56.485 SYMLINK libspdk_accel_iaa.so
00:02:56.485 SO libspdk_accel_dsa.so.5.0
00:02:56.485 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:56.485 CC module/blobfs/bdev/blobfs_bdev.o
00:02:56.485 SO libspdk_fsdev_aio.so.1.0
00:02:56.485 SYMLINK libspdk_keyring_linux.so
00:02:56.485 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:56.485 CC module/bdev/error/vbdev_error.o
00:02:56.485 SYMLINK libspdk_accel_dsa.so
00:02:56.485 CC module/bdev/error/vbdev_error_rpc.o
00:02:56.485 CC module/bdev/gpt/gpt.o
00:02:56.485 CC module/bdev/lvol/vbdev_lvol.o
00:02:56.485 SYMLINK libspdk_fsdev_aio.so
00:02:56.485 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:56.743 CC module/bdev/gpt/vbdev_gpt.o
00:02:56.743 LIB libspdk_sock_posix.a
00:02:56.743 LIB libspdk_blobfs_bdev.a
00:02:56.743 SO libspdk_sock_posix.so.6.0
00:02:56.743 SO libspdk_blobfs_bdev.so.6.0
00:02:56.743 LIB libspdk_bdev_error.a
00:02:56.743 SO libspdk_bdev_error.so.6.0
00:02:56.743 SYMLINK libspdk_sock_posix.so
00:02:56.743 SYMLINK libspdk_blobfs_bdev.so
00:02:56.743 SYMLINK libspdk_bdev_error.so
00:02:56.743 CC module/bdev/malloc/bdev_malloc.o
00:02:56.743 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:56.743 CC module/bdev/null/bdev_null.o
00:02:56.743 LIB libspdk_bdev_delay.a
00:02:56.743 LIB libspdk_bdev_gpt.a
00:02:56.743 SO libspdk_bdev_delay.so.6.0
00:02:57.001 SO libspdk_bdev_gpt.so.6.0
00:02:57.001 CC module/bdev/nvme/bdev_nvme.o
00:02:57.001 CC module/bdev/passthru/vbdev_passthru.o
00:02:57.001 SYMLINK libspdk_bdev_delay.so
00:02:57.001 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:57.001 SYMLINK libspdk_bdev_gpt.so
00:02:57.001 CC module/bdev/nvme/nvme_rpc.o
00:02:57.001 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:57.001 CC module/bdev/raid/bdev_raid.o
00:02:57.001 LIB libspdk_bdev_lvol.a
00:02:57.001 SO libspdk_bdev_lvol.so.6.0
00:02:57.001 CC module/bdev/null/bdev_null_rpc.o
00:02:57.001 SYMLINK libspdk_bdev_lvol.so
00:02:57.001 CC module/bdev/split/vbdev_split.o
00:02:57.260 LIB libspdk_bdev_passthru.a
00:02:57.260 LIB libspdk_bdev_malloc.a
00:02:57.260 SO libspdk_bdev_passthru.so.6.0
00:02:57.260 LIB libspdk_bdev_null.a
00:02:57.260 SO libspdk_bdev_malloc.so.6.0
00:02:57.260 SO libspdk_bdev_null.so.6.0
00:02:57.260 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:57.260 SYMLINK libspdk_bdev_passthru.so
00:02:57.260 CC module/bdev/nvme/bdev_mdns_client.o
00:02:57.260 SYMLINK libspdk_bdev_malloc.so
00:02:57.260 CC module/bdev/nvme/vbdev_opal.o
00:02:57.260 CC module/bdev/split/vbdev_split_rpc.o
00:02:57.260 SYMLINK libspdk_bdev_null.so
00:02:57.260 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:57.260 CC module/bdev/xnvme/bdev_xnvme.o
00:02:57.260 CC module/bdev/aio/bdev_aio.o
00:02:57.260 CC module/bdev/aio/bdev_aio_rpc.o
00:02:57.260 LIB libspdk_bdev_split.a
00:02:57.518 SO libspdk_bdev_split.so.6.0
00:02:57.518 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:57.518 SYMLINK libspdk_bdev_split.so
00:02:57.518 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:57.518 CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:02:57.518 CC module/bdev/raid/bdev_raid_rpc.o
00:02:57.518 LIB libspdk_bdev_zone_block.a
00:02:57.518 LIB libspdk_bdev_aio.a
00:02:57.518 SO libspdk_bdev_zone_block.so.6.0
00:02:57.518 CC module/bdev/ftl/bdev_ftl.o
00:02:57.518 SO libspdk_bdev_aio.so.6.0
00:02:57.518 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:57.518 CC module/bdev/iscsi/bdev_iscsi.o
00:02:57.518 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:57.518 SYMLINK libspdk_bdev_zone_block.so
00:02:57.518 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:57.777 SYMLINK libspdk_bdev_aio.so
00:02:57.777 LIB libspdk_bdev_xnvme.a
00:02:57.777 CC module/bdev/raid/bdev_raid_sb.o
00:02:57.777 SO libspdk_bdev_xnvme.so.3.0
00:02:57.777 CC module/bdev/raid/raid0.o
00:02:57.777 SYMLINK libspdk_bdev_xnvme.so
00:02:57.777 CC module/bdev/raid/raid1.o
00:02:57.777 CC module/bdev/raid/concat.o
00:02:57.777 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:57.777 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:57.777 LIB libspdk_bdev_ftl.a
00:02:57.777 LIB libspdk_bdev_iscsi.a
00:02:58.035 SO libspdk_bdev_ftl.so.6.0
00:02:58.035 SO libspdk_bdev_iscsi.so.6.0
00:02:58.035 SYMLINK libspdk_bdev_ftl.so
00:02:58.035 SYMLINK libspdk_bdev_iscsi.so
00:02:58.035 LIB libspdk_bdev_raid.a
00:02:58.035 SO libspdk_bdev_raid.so.6.0
00:02:58.035 LIB libspdk_bdev_virtio.a
00:02:58.035 SO libspdk_bdev_virtio.so.6.0
00:02:58.035 SYMLINK libspdk_bdev_raid.so
00:02:58.035 SYMLINK libspdk_bdev_virtio.so
00:02:58.970 LIB libspdk_bdev_nvme.a
00:02:58.970 SO libspdk_bdev_nvme.so.7.1
00:02:59.228 SYMLINK libspdk_bdev_nvme.so
00:02:59.487 CC module/event/subsystems/sock/sock.o
00:02:59.487 CC module/event/subsystems/fsdev/fsdev.o
00:02:59.487 CC module/event/subsystems/scheduler/scheduler.o
00:02:59.487 CC module/event/subsystems/iobuf/iobuf.o
00:02:59.487 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:59.487 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:59.487 CC module/event/subsystems/vmd/vmd.o
00:02:59.487 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:59.487 CC module/event/subsystems/keyring/keyring.o
00:02:59.487 LIB libspdk_event_keyring.a
00:02:59.487 LIB libspdk_event_scheduler.a
00:02:59.487 LIB libspdk_event_vhost_blk.a
00:02:59.746 LIB libspdk_event_sock.a
00:02:59.746 SO libspdk_event_keyring.so.1.0
00:02:59.746 SO libspdk_event_vhost_blk.so.3.0
00:02:59.746 SO libspdk_event_scheduler.so.4.0
00:02:59.746 SO libspdk_event_sock.so.5.0
00:02:59.746 LIB libspdk_event_vmd.a
00:02:59.746 LIB libspdk_event_iobuf.a
00:02:59.746 SO libspdk_event_vmd.so.6.0
00:02:59.746 LIB libspdk_event_fsdev.a
00:02:59.746 SYMLINK libspdk_event_vhost_blk.so
00:02:59.746 SYMLINK libspdk_event_scheduler.so
00:02:59.746 SO libspdk_event_iobuf.so.3.0
00:02:59.746 SYMLINK libspdk_event_keyring.so
00:02:59.746 SYMLINK libspdk_event_sock.so
00:02:59.746 SO libspdk_event_fsdev.so.1.0
00:02:59.746 SYMLINK libspdk_event_vmd.so
00:02:59.746 SYMLINK libspdk_event_iobuf.so
00:02:59.746 SYMLINK libspdk_event_fsdev.so
00:03:00.005 CC module/event/subsystems/accel/accel.o
00:03:00.005 LIB libspdk_event_accel.a
00:03:00.005 SO libspdk_event_accel.so.6.0
00:03:00.263 SYMLINK libspdk_event_accel.so
00:03:00.522 CC module/event/subsystems/bdev/bdev.o
00:03:00.522 LIB libspdk_event_bdev.a
00:03:00.522 SO libspdk_event_bdev.so.6.0
00:03:00.522 SYMLINK libspdk_event_bdev.so
00:03:00.780 CC module/event/subsystems/nbd/nbd.o
00:03:00.780 CC module/event/subsystems/ublk/ublk.o
00:03:00.780 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:00.780 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:00.780 CC module/event/subsystems/scsi/scsi.o
00:03:00.780 LIB libspdk_event_nbd.a
00:03:01.051 LIB libspdk_event_scsi.a
00:03:01.051 LIB libspdk_event_ublk.a
00:03:01.051 SO libspdk_event_nbd.so.6.0
00:03:01.051 SO libspdk_event_scsi.so.6.0
00:03:01.051 SO libspdk_event_ublk.so.3.0
00:03:01.051 SYMLINK libspdk_event_nbd.so
00:03:01.051 LIB libspdk_event_nvmf.a
00:03:01.051 SYMLINK libspdk_event_ublk.so
00:03:01.051 SYMLINK libspdk_event_scsi.so
00:03:01.051 SO libspdk_event_nvmf.so.6.0
00:03:01.051 SYMLINK libspdk_event_nvmf.so
00:03:01.307 CC module/event/subsystems/iscsi/iscsi.o
00:03:01.307 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:01.307 LIB libspdk_event_vhost_scsi.a
00:03:01.307 LIB libspdk_event_iscsi.a
00:03:01.307 SO libspdk_event_vhost_scsi.so.3.0
00:03:01.307 SO libspdk_event_iscsi.so.6.0
00:03:01.307 SYMLINK libspdk_event_vhost_scsi.so
00:03:01.307 SYMLINK libspdk_event_iscsi.so
00:03:01.570 SO libspdk.so.6.0
00:03:01.570 SYMLINK libspdk.so
00:03:01.828 CC test/rpc_client/rpc_client_test.o
00:03:01.828 TEST_HEADER include/spdk/accel.h
00:03:01.828 TEST_HEADER include/spdk/accel_module.h
00:03:01.828 TEST_HEADER include/spdk/assert.h
00:03:01.828 TEST_HEADER include/spdk/barrier.h
00:03:01.828 TEST_HEADER include/spdk/base64.h
00:03:01.828 TEST_HEADER include/spdk/bdev.h
00:03:01.828 CXX app/trace/trace.o
00:03:01.828 TEST_HEADER include/spdk/bdev_module.h
00:03:01.828 TEST_HEADER include/spdk/bdev_zone.h
00:03:01.828 TEST_HEADER include/spdk/bit_array.h
00:03:01.828 TEST_HEADER include/spdk/bit_pool.h
00:03:01.828 TEST_HEADER include/spdk/blob_bdev.h
00:03:01.828 TEST_HEADER include/spdk/blobfs_bdev.h
00:03:01.828 TEST_HEADER include/spdk/blobfs.h
00:03:01.828 TEST_HEADER include/spdk/blob.h
00:03:01.828 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:01.828 TEST_HEADER include/spdk/conf.h
00:03:01.828 TEST_HEADER include/spdk/config.h
00:03:01.828 TEST_HEADER include/spdk/cpuset.h
00:03:01.828 TEST_HEADER include/spdk/crc16.h
00:03:01.828 TEST_HEADER include/spdk/crc32.h
00:03:01.828 TEST_HEADER include/spdk/crc64.h
00:03:01.828 TEST_HEADER include/spdk/dif.h
00:03:01.828 TEST_HEADER include/spdk/dma.h
00:03:01.828 TEST_HEADER include/spdk/endian.h
00:03:01.828 TEST_HEADER include/spdk/env_dpdk.h
00:03:01.828 TEST_HEADER include/spdk/env.h
00:03:01.828 TEST_HEADER include/spdk/event.h
00:03:01.828 TEST_HEADER include/spdk/fd_group.h
00:03:01.828 TEST_HEADER include/spdk/fd.h
00:03:01.828 TEST_HEADER include/spdk/file.h
00:03:01.828 TEST_HEADER include/spdk/fsdev.h
00:03:01.828 TEST_HEADER include/spdk/fsdev_module.h
00:03:01.828 TEST_HEADER include/spdk/ftl.h
00:03:01.828 TEST_HEADER include/spdk/gpt_spec.h
00:03:01.828 TEST_HEADER include/spdk/hexlify.h
00:03:01.828 TEST_HEADER include/spdk/histogram_data.h
00:03:01.828 TEST_HEADER include/spdk/idxd.h
00:03:01.828 TEST_HEADER include/spdk/idxd_spec.h
00:03:01.828 TEST_HEADER include/spdk/init.h
00:03:01.828 TEST_HEADER include/spdk/ioat.h
00:03:01.828 TEST_HEADER include/spdk/ioat_spec.h
00:03:01.828 TEST_HEADER include/spdk/iscsi_spec.h
00:03:01.828 CC examples/util/zipf/zipf.o
00:03:01.828 TEST_HEADER include/spdk/json.h
00:03:01.828 TEST_HEADER include/spdk/jsonrpc.h
00:03:01.828 TEST_HEADER include/spdk/keyring.h
00:03:01.828 TEST_HEADER include/spdk/keyring_module.h
00:03:01.828 CC examples/ioat/perf/perf.o
00:03:01.828 TEST_HEADER include/spdk/likely.h
00:03:01.828 TEST_HEADER include/spdk/log.h
00:03:01.828 TEST_HEADER include/spdk/lvol.h
00:03:01.828 TEST_HEADER include/spdk/md5.h
00:03:01.828 TEST_HEADER include/spdk/memory.h
00:03:01.828 CC test/thread/poller_perf/poller_perf.o
00:03:01.828 TEST_HEADER include/spdk/mmio.h
00:03:01.828 TEST_HEADER include/spdk/nbd.h
00:03:01.828 TEST_HEADER include/spdk/net.h
00:03:01.828 TEST_HEADER include/spdk/notify.h
00:03:01.828 TEST_HEADER include/spdk/nvme.h
00:03:01.828 TEST_HEADER include/spdk/nvme_intel.h
00:03:01.828 TEST_HEADER include/spdk/nvme_ocssd.h
00:03:01.828 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:01.828 TEST_HEADER include/spdk/nvme_spec.h
00:03:01.828 TEST_HEADER include/spdk/nvme_zns.h
00:03:01.828 TEST_HEADER include/spdk/nvmf_cmd.h
00:03:01.828 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:01.828 TEST_HEADER include/spdk/nvmf.h
00:03:01.828 TEST_HEADER include/spdk/nvmf_spec.h
00:03:01.828 TEST_HEADER include/spdk/nvmf_transport.h
00:03:01.828 CC test/app/bdev_svc/bdev_svc.o
00:03:01.828 TEST_HEADER include/spdk/opal.h
00:03:01.828 CC test/dma/test_dma/test_dma.o
00:03:01.828 TEST_HEADER include/spdk/opal_spec.h
00:03:01.828 TEST_HEADER include/spdk/pci_ids.h
00:03:01.828 TEST_HEADER include/spdk/pipe.h
00:03:01.828 TEST_HEADER include/spdk/queue.h
00:03:01.828 TEST_HEADER include/spdk/reduce.h
00:03:01.828 TEST_HEADER include/spdk/rpc.h
00:03:01.828 TEST_HEADER include/spdk/scheduler.h
00:03:01.828 TEST_HEADER include/spdk/scsi.h
00:03:01.828 TEST_HEADER include/spdk/scsi_spec.h
00:03:01.828 TEST_HEADER include/spdk/sock.h
00:03:01.828 TEST_HEADER include/spdk/stdinc.h
00:03:01.828 TEST_HEADER include/spdk/string.h
00:03:01.828 TEST_HEADER include/spdk/thread.h
00:03:01.828 TEST_HEADER include/spdk/trace.h
00:03:01.828 TEST_HEADER include/spdk/trace_parser.h
00:03:01.828 TEST_HEADER include/spdk/tree.h
00:03:01.828 LINK rpc_client_test
00:03:01.828 TEST_HEADER include/spdk/ublk.h
00:03:01.828 TEST_HEADER include/spdk/util.h
00:03:01.828 TEST_HEADER include/spdk/uuid.h
00:03:01.828 TEST_HEADER include/spdk/version.h
00:03:01.828 TEST_HEADER include/spdk/vfio_user_pci.h
00:03:01.828 TEST_HEADER include/spdk/vfio_user_spec.h
00:03:01.828 TEST_HEADER include/spdk/vhost.h
00:03:01.828 TEST_HEADER include/spdk/vmd.h
00:03:01.828 TEST_HEADER include/spdk/xor.h
00:03:01.828 TEST_HEADER include/spdk/zipf.h
00:03:01.828 CXX test/cpp_headers/accel.o
00:03:01.828 CC test/env/mem_callbacks/mem_callbacks.o
00:03:01.828 LINK interrupt_tgt
00:03:01.828 LINK zipf
00:03:01.828 LINK poller_perf
00:03:02.086 LINK ioat_perf
00:03:02.086 LINK bdev_svc
00:03:02.086 CXX test/cpp_headers/accel_module.o
00:03:02.086 CC test/env/vtophys/vtophys.o
00:03:02.086 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:02.086 CC test/app/histogram_perf/histogram_perf.o
00:03:02.086 CC examples/ioat/verify/verify.o
00:03:02.086 CXX test/cpp_headers/assert.o
00:03:02.086 LINK spdk_trace
00:03:02.086 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:02.086 LINK vtophys
00:03:02.086 LINK env_dpdk_post_init
00:03:02.344 LINK test_dma
00:03:02.344 LINK histogram_perf
00:03:02.344 CXX test/cpp_headers/barrier.o
00:03:02.344 CXX test/cpp_headers/base64.o
00:03:02.344 LINK verify
00:03:02.344 CC test/event/event_perf/event_perf.o
00:03:02.344 LINK mem_callbacks
00:03:02.344 CC app/trace_record/trace_record.o
00:03:02.344 CXX test/cpp_headers/bdev.o
00:03:02.344 CC app/nvmf_tgt/nvmf_main.o
00:03:02.344 CC test/env/memory/memory_ut.o
00:03:02.344 CC test/env/pci/pci_ut.o
00:03:02.602 CXX test/cpp_headers/bdev_module.o
00:03:02.602 CC app/iscsi_tgt/iscsi_tgt.o
00:03:02.602 LINK event_perf
00:03:02.602 LINK nvme_fuzz
00:03:02.602 LINK nvmf_tgt
00:03:02.602 CC examples/thread/thread/thread_ex.o
00:03:02.602 LINK spdk_trace_record
00:03:02.602 CXX test/cpp_headers/bdev_zone.o
00:03:02.602 LINK iscsi_tgt
00:03:02.602 CC test/event/reactor/reactor.o
00:03:02.602 CC examples/sock/hello_world/hello_sock.o
00:03:02.861 LINK pci_ut
00:03:02.861 CXX test/cpp_headers/bit_array.o
00:03:02.861 LINK reactor
00:03:02.861 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:02.861 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:02.861 LINK thread
00:03:02.861 CXX test/cpp_headers/bit_pool.o
00:03:02.861 CC app/spdk_tgt/spdk_tgt.o
00:03:02.861 CC test/accel/dif/dif.o
00:03:02.861 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:02.861 LINK hello_sock
00:03:03.120 CC test/event/reactor_perf/reactor_perf.o
00:03:03.120 CXX test/cpp_headers/blob_bdev.o
00:03:03.120 LINK reactor_perf
00:03:03.120 LINK spdk_tgt
00:03:03.120 CXX test/cpp_headers/blobfs_bdev.o
00:03:03.120 CC test/blobfs/mkfs/mkfs.o
00:03:03.120 CC examples/vmd/lsvmd/lsvmd.o
00:03:03.379 CC test/lvol/esnap/esnap.o
00:03:03.379 LINK memory_ut
00:03:03.379 LINK vhost_fuzz
00:03:03.379 CXX test/cpp_headers/blobfs.o
00:03:03.379 CC test/event/app_repeat/app_repeat.o
00:03:03.379 LINK lsvmd
00:03:03.379 CC app/spdk_lspci/spdk_lspci.o
00:03:03.379 LINK mkfs
00:03:03.379 CXX test/cpp_headers/blob.o
00:03:03.379 LINK spdk_lspci
00:03:03.379 LINK app_repeat
00:03:03.637 CXX test/cpp_headers/conf.o
00:03:03.637 CC app/spdk_nvme_identify/identify.o
00:03:03.637 CC app/spdk_nvme_perf/perf.o
00:03:03.637 CC examples/vmd/led/led.o
00:03:03.637 LINK dif
00:03:03.637 CC app/spdk_nvme_discover/discovery_aer.o
00:03:03.637 CXX test/cpp_headers/config.o
00:03:03.637 CXX test/cpp_headers/cpuset.o
00:03:03.637 LINK led
00:03:03.637 CC test/event/scheduler/scheduler.o
00:03:03.637 CC test/nvme/aer/aer.o
00:03:03.895 CXX test/cpp_headers/crc16.o
00:03:03.895 LINK spdk_nvme_discover
00:03:03.895 CXX test/cpp_headers/crc32.o
00:03:03.895 LINK scheduler
00:03:03.895 CC examples/idxd/perf/perf.o
00:03:03.895 CXX test/cpp_headers/crc64.o
00:03:03.895 CC test/bdev/bdevio/bdevio.o
00:03:04.153 LINK aer
00:03:04.153 CXX test/cpp_headers/dif.o
00:03:04.153 CC test/nvme/reset/reset.o
00:03:04.153 CXX test/cpp_headers/dma.o
00:03:04.153 LINK iscsi_fuzz
00:03:04.153 CC test/nvme/sgl/sgl.o
00:03:04.153 CXX test/cpp_headers/endian.o
00:03:04.153 LINK spdk_nvme_perf
00:03:04.153 LINK idxd_perf
00:03:04.411 CXX test/cpp_headers/env_dpdk.o
00:03:04.411 LINK spdk_nvme_identify
00:03:04.411 LINK reset
00:03:04.411 CC test/nvme/e2edp/nvme_dp.o
00:03:04.411 LINK bdevio
00:03:04.411 CC test/app/jsoncat/jsoncat.o
00:03:04.411 CC app/spdk_top/spdk_top.o
00:03:04.411 CXX test/cpp_headers/env.o
00:03:04.411 LINK sgl
00:03:04.411 CC examples/fsdev/hello_world/hello_fsdev.o
00:03:04.669 LINK jsoncat
00:03:04.669 LINK nvme_dp
00:03:04.669 CXX test/cpp_headers/event.o
00:03:04.669 CC examples/accel/perf/accel_perf.o
00:03:04.669 CC test/nvme/overhead/overhead.o
00:03:04.669 CC examples/blob/hello_world/hello_blob.o
00:03:04.669 CC examples/nvme/hello_world/hello_world.o
00:03:04.669 CXX
test/cpp_headers/fd_group.o 00:03:04.669 LINK hello_fsdev 00:03:04.669 CC test/app/stub/stub.o 00:03:04.927 CC examples/nvme/reconnect/reconnect.o 00:03:04.927 LINK overhead 00:03:04.927 LINK hello_blob 00:03:04.927 CXX test/cpp_headers/fd.o 00:03:04.927 LINK hello_world 00:03:04.927 LINK stub 00:03:04.927 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:04.927 CXX test/cpp_headers/file.o 00:03:04.927 LINK accel_perf 00:03:05.185 CXX test/cpp_headers/fsdev.o 00:03:05.185 CC test/nvme/err_injection/err_injection.o 00:03:05.185 CXX test/cpp_headers/fsdev_module.o 00:03:05.185 CC examples/blob/cli/blobcli.o 00:03:05.185 LINK reconnect 00:03:05.185 LINK spdk_top 00:03:05.186 CXX test/cpp_headers/ftl.o 00:03:05.186 LINK err_injection 00:03:05.186 CC test/nvme/startup/startup.o 00:03:05.186 CC test/nvme/reserve/reserve.o 00:03:05.186 CC test/nvme/simple_copy/simple_copy.o 00:03:05.444 CXX test/cpp_headers/gpt_spec.o 00:03:05.444 CXX test/cpp_headers/hexlify.o 00:03:05.444 LINK nvme_manage 00:03:05.444 CC app/vhost/vhost.o 00:03:05.444 LINK startup 00:03:05.444 CXX test/cpp_headers/histogram_data.o 00:03:05.444 LINK simple_copy 00:03:05.444 LINK reserve 00:03:05.444 LINK blobcli 00:03:05.444 LINK vhost 00:03:05.444 CXX test/cpp_headers/idxd.o 00:03:05.702 CC examples/bdev/hello_world/hello_bdev.o 00:03:05.702 CC examples/nvme/arbitration/arbitration.o 00:03:05.702 CC test/nvme/connect_stress/connect_stress.o 00:03:05.702 CC examples/bdev/bdevperf/bdevperf.o 00:03:05.702 CC examples/nvme/hotplug/hotplug.o 00:03:05.702 CXX test/cpp_headers/idxd_spec.o 00:03:05.702 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:05.702 LINK hello_bdev 00:03:05.702 LINK connect_stress 00:03:05.702 CC examples/nvme/abort/abort.o 00:03:05.702 CXX test/cpp_headers/init.o 00:03:05.960 CC app/spdk_dd/spdk_dd.o 00:03:05.960 LINK cmb_copy 00:03:05.960 LINK hotplug 00:03:05.960 CXX test/cpp_headers/ioat.o 00:03:05.960 LINK arbitration 00:03:05.960 CXX test/cpp_headers/ioat_spec.o 00:03:05.960 CC test/nvme/boot_partition/boot_partition.o 00:03:05.960 CXX test/cpp_headers/iscsi_spec.o 00:03:05.960 CC test/nvme/compliance/nvme_compliance.o 00:03:05.960 CXX test/cpp_headers/json.o 00:03:05.960 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.218 LINK boot_partition 00:03:06.218 LINK abort 00:03:06.218 CXX test/cpp_headers/jsonrpc.o 00:03:06.218 CXX test/cpp_headers/keyring.o 00:03:06.218 LINK spdk_dd 00:03:06.218 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:06.218 CXX test/cpp_headers/keyring_module.o 00:03:06.218 LINK fused_ordering 00:03:06.218 CXX test/cpp_headers/likely.o 00:03:06.476 LINK nvme_compliance 00:03:06.476 LINK bdevperf 00:03:06.476 CXX test/cpp_headers/log.o 00:03:06.476 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:06.476 LINK doorbell_aers 00:03:06.476 CC test/nvme/fdp/fdp.o 00:03:06.476 CXX test/cpp_headers/lvol.o 00:03:06.476 CXX test/cpp_headers/md5.o 00:03:06.476 CXX test/cpp_headers/memory.o 00:03:06.476 CC app/fio/nvme/fio_plugin.o 00:03:06.476 CXX test/cpp_headers/mmio.o 00:03:06.476 CXX test/cpp_headers/nbd.o 00:03:06.476 LINK pmr_persistence 00:03:06.476 CXX test/cpp_headers/net.o 00:03:06.734 CC app/fio/bdev/fio_plugin.o 00:03:06.734 CC test/nvme/cuse/cuse.o 00:03:06.734 CXX test/cpp_headers/notify.o 00:03:06.734 CXX test/cpp_headers/nvme.o 00:03:06.734 CXX test/cpp_headers/nvme_intel.o 00:03:06.734 CXX test/cpp_headers/nvme_ocssd.o 00:03:06.734 LINK fdp 00:03:06.734 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:06.734 CXX test/cpp_headers/nvme_spec.o 00:03:06.734 CXX 
test/cpp_headers/nvme_zns.o 00:03:06.734 CXX test/cpp_headers/nvmf_cmd.o 00:03:06.734 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:06.993 CC examples/nvmf/nvmf/nvmf.o 00:03:06.993 CXX test/cpp_headers/nvmf.o 00:03:06.993 CXX test/cpp_headers/nvmf_spec.o 00:03:06.993 CXX test/cpp_headers/nvmf_transport.o 00:03:06.993 CXX test/cpp_headers/opal.o 00:03:06.993 CXX test/cpp_headers/opal_spec.o 00:03:06.993 CXX test/cpp_headers/pci_ids.o 00:03:06.993 LINK spdk_nvme 00:03:06.993 CXX test/cpp_headers/pipe.o 00:03:06.993 LINK spdk_bdev 00:03:06.993 CXX test/cpp_headers/queue.o 00:03:06.993 LINK nvmf 00:03:07.251 CXX test/cpp_headers/reduce.o 00:03:07.251 CXX test/cpp_headers/rpc.o 00:03:07.251 CXX test/cpp_headers/scheduler.o 00:03:07.251 CXX test/cpp_headers/scsi.o 00:03:07.251 CXX test/cpp_headers/scsi_spec.o 00:03:07.251 CXX test/cpp_headers/sock.o 00:03:07.251 CXX test/cpp_headers/stdinc.o 00:03:07.251 CXX test/cpp_headers/string.o 00:03:07.251 CXX test/cpp_headers/thread.o 00:03:07.251 CXX test/cpp_headers/trace.o 00:03:07.251 CXX test/cpp_headers/trace_parser.o 00:03:07.251 CXX test/cpp_headers/tree.o 00:03:07.251 CXX test/cpp_headers/ublk.o 00:03:07.251 CXX test/cpp_headers/util.o 00:03:07.251 CXX test/cpp_headers/uuid.o 00:03:07.251 CXX test/cpp_headers/version.o 00:03:07.251 CXX test/cpp_headers/vfio_user_pci.o 00:03:07.251 CXX test/cpp_headers/vfio_user_spec.o 00:03:07.251 CXX test/cpp_headers/vhost.o 00:03:07.251 CXX test/cpp_headers/vmd.o 00:03:07.509 CXX test/cpp_headers/xor.o 00:03:07.509 CXX test/cpp_headers/zipf.o 00:03:07.769 LINK cuse 00:03:08.336 LINK esnap 00:03:08.595 00:03:08.595 real 1m6.781s 00:03:08.595 user 6m12.561s 00:03:08.595 sys 1m5.932s 00:03:08.595 20:13:52 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:08.595 20:13:52 make -- common/autotest_common.sh@10 -- $ set +x 00:03:08.595 ************************************ 00:03:08.595 END TEST make 00:03:08.595 ************************************ 00:03:08.595 20:13:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:08.595 20:13:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:08.595 20:13:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:08.595 20:13:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.595 20:13:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:08.595 20:13:52 -- pm/common@44 -- $ pid=5076 00:03:08.595 20:13:52 -- pm/common@50 -- $ kill -TERM 5076 00:03:08.595 20:13:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.595 20:13:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:08.595 20:13:52 -- pm/common@44 -- $ pid=5077 00:03:08.595 20:13:52 -- pm/common@50 -- $ kill -TERM 5077 00:03:08.595 20:13:52 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:08.595 20:13:52 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:08.853 20:13:52 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:08.853 20:13:52 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:08.853 20:13:52 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:08.853 20:13:52 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:08.853 20:13:52 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:08.853 20:13:52 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:08.853 20:13:52 -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:03:08.853 20:13:52 -- scripts/common.sh@336 -- # IFS=.-: 00:03:08.853 20:13:52 -- scripts/common.sh@336 -- # read -ra ver1 00:03:08.853 20:13:52 -- scripts/common.sh@337 -- # IFS=.-: 00:03:08.853 20:13:52 -- scripts/common.sh@337 -- # read -ra ver2 00:03:08.853 20:13:52 -- scripts/common.sh@338 -- # local 'op=<' 00:03:08.853 20:13:52 -- scripts/common.sh@340 -- # ver1_l=2 00:03:08.853 20:13:52 -- scripts/common.sh@341 -- # ver2_l=1 00:03:08.853 20:13:52 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:08.853 20:13:52 -- scripts/common.sh@344 -- # case "$op" in 00:03:08.853 20:13:52 -- scripts/common.sh@345 -- # : 1 00:03:08.853 20:13:52 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:08.853 20:13:52 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:08.853 20:13:52 -- scripts/common.sh@365 -- # decimal 1 00:03:08.853 20:13:52 -- scripts/common.sh@353 -- # local d=1 00:03:08.853 20:13:52 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:08.853 20:13:52 -- scripts/common.sh@355 -- # echo 1 00:03:08.853 20:13:52 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:08.853 20:13:52 -- scripts/common.sh@366 -- # decimal 2 00:03:08.853 20:13:52 -- scripts/common.sh@353 -- # local d=2 00:03:08.853 20:13:52 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:08.853 20:13:52 -- scripts/common.sh@355 -- # echo 2 00:03:08.853 20:13:52 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:08.853 20:13:52 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:08.853 20:13:52 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:08.853 20:13:52 -- scripts/common.sh@368 -- # return 0 00:03:08.853 20:13:52 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:08.853 20:13:52 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:08.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.853 --rc genhtml_branch_coverage=1 00:03:08.853 --rc genhtml_function_coverage=1 00:03:08.853 --rc genhtml_legend=1 00:03:08.853 --rc geninfo_all_blocks=1 00:03:08.853 --rc geninfo_unexecuted_blocks=1 00:03:08.853 00:03:08.853 ' 00:03:08.853 20:13:52 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:08.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.853 --rc genhtml_branch_coverage=1 00:03:08.853 --rc genhtml_function_coverage=1 00:03:08.853 --rc genhtml_legend=1 00:03:08.853 --rc geninfo_all_blocks=1 00:03:08.853 --rc geninfo_unexecuted_blocks=1 00:03:08.853 00:03:08.853 ' 00:03:08.853 20:13:52 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:08.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.853 --rc genhtml_branch_coverage=1 00:03:08.853 --rc genhtml_function_coverage=1 00:03:08.853 --rc genhtml_legend=1 00:03:08.853 --rc geninfo_all_blocks=1 00:03:08.853 --rc geninfo_unexecuted_blocks=1 00:03:08.853 00:03:08.853 ' 00:03:08.853 20:13:52 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:08.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.853 --rc genhtml_branch_coverage=1 00:03:08.854 --rc genhtml_function_coverage=1 00:03:08.854 --rc genhtml_legend=1 00:03:08.854 --rc geninfo_all_blocks=1 00:03:08.854 --rc geninfo_unexecuted_blocks=1 00:03:08.854 00:03:08.854 ' 00:03:08.854 20:13:52 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:08.854 20:13:52 -- nvmf/common.sh@7 -- # uname -s 00:03:08.854 20:13:52 -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:08.854 20:13:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:08.854 20:13:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:08.854 20:13:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:08.854 20:13:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:08.854 20:13:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:08.854 20:13:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:08.854 20:13:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:08.854 20:13:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:08.854 20:13:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:08.854 20:13:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df74ab9f-c50a-47a3-a4fc-d710e0af4003 00:03:08.854 20:13:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=df74ab9f-c50a-47a3-a4fc-d710e0af4003 00:03:08.854 20:13:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:08.854 20:13:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:08.854 20:13:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:08.854 20:13:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:08.854 20:13:52 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:08.854 20:13:52 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:08.854 20:13:52 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:08.854 20:13:52 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:08.854 20:13:52 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:08.854 20:13:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.854 20:13:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.854 20:13:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.854 20:13:52 -- paths/export.sh@5 -- # export PATH 00:03:08.854 20:13:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.854 20:13:52 -- nvmf/common.sh@51 -- # : 0 00:03:08.854 20:13:52 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:08.854 20:13:52 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:08.854 20:13:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:08.854 20:13:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:08.854 20:13:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:08.854 20:13:52 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:08.854 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:08.854 20:13:52 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:08.854 20:13:52 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:08.854 20:13:52 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:08.854 20:13:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:08.854 20:13:52 -- spdk/autotest.sh@32 -- # uname -s 00:03:08.854 20:13:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:08.854 20:13:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:08.854 20:13:52 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:08.854 20:13:52 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:08.854 20:13:52 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:08.854 20:13:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:08.854 20:13:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:08.854 20:13:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:08.854 20:13:53 -- spdk/autotest.sh@48 -- # udevadm_pid=56055 00:03:08.854 20:13:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:08.854 20:13:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:08.854 20:13:53 -- pm/common@17 -- # local monitor 00:03:08.854 20:13:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.854 20:13:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.854 20:13:53 -- pm/common@25 -- # sleep 1 00:03:08.854 20:13:53 -- pm/common@21 -- # date +%s 00:03:08.854 20:13:53 -- pm/common@21 -- # date +%s 00:03:08.854 20:13:53 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734034433 00:03:08.854 20:13:53 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734034433 00:03:08.854 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734034433_collect-vmstat.pm.log 00:03:08.854 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734034433_collect-cpu-load.pm.log 00:03:10.227 20:13:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:10.227 20:13:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:10.227 20:13:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:10.227 20:13:54 -- common/autotest_common.sh@10 -- # set +x 00:03:10.227 20:13:54 -- spdk/autotest.sh@59 -- # create_test_list 00:03:10.227 20:13:54 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:10.227 20:13:54 -- common/autotest_common.sh@10 -- # set +x 00:03:10.227 20:13:54 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:10.227 20:13:54 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:10.227 20:13:54 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:10.227 20:13:54 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:10.227 20:13:54 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:10.227 20:13:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:10.227 20:13:54 -- common/autotest_common.sh@1457 -- # uname 00:03:10.227 20:13:54 -- 
common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:10.227 20:13:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:10.227 20:13:54 -- common/autotest_common.sh@1477 -- # uname 00:03:10.227 20:13:54 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:10.227 20:13:54 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:10.227 20:13:54 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:10.227 lcov: LCOV version 1.15 00:03:10.227 20:13:54 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:25.107 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:25.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:40.007 20:14:23 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:40.007 20:14:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:40.007 20:14:23 -- common/autotest_common.sh@10 -- # set +x 00:03:40.007 20:14:23 -- spdk/autotest.sh@78 -- # rm -f 00:03:40.007 20:14:23 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:40.268 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:40.268 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:40.268 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:40.268 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:40.268 20:14:24 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:40.268 20:14:24 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:40.268 20:14:24 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:40.268 20:14:24 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:40.268 20:14:24 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:40.268 20:14:24 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:40.268 20:14:24 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:40.268 20:14:24 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:40.268 20:14:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:40.268 20:14:24 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:40.268 20:14:24 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:40.268 20:14:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:40.268 20:14:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:40.268 20:14:24 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:40.268 20:14:24 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:03:40.268 20:14:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:40.268 20:14:24 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:03:40.268 20:14:24 -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:03:40.268 20:14:24 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:03:40.268 20:14:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:40.268 20:14:24 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:40.268 20:14:24 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:40.268 20:14:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:40.268 20:14:24 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:03:40.268 20:14:24 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:40.268 20:14:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:40.268 20:14:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:40.268 20:14:24 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:40.268 20:14:24 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:03:40.268 20:14:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:40.268 20:14:24 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:03:40.268 20:14:24 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:03:40.268 20:14:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:40.268 20:14:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:40.268 20:14:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:40.530 20:14:24 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n2 00:03:40.530 20:14:24 -- common/autotest_common.sh@1650 -- # local device=nvme3n2 00:03:40.530 20:14:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:03:40.530 20:14:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:40.530 20:14:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:40.530 20:14:24 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n3 00:03:40.530 20:14:24 -- common/autotest_common.sh@1650 -- # local device=nvme3n3 00:03:40.530 20:14:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:03:40.530 20:14:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:40.530 20:14:24 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:40.530 20:14:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:40.530 20:14:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:40.530 20:14:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:40.530 20:14:24 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:40.530 20:14:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:40.530 No valid GPT data, bailing 00:03:40.530 20:14:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:40.530 20:14:24 -- scripts/common.sh@394 -- # pt= 00:03:40.530 20:14:24 -- scripts/common.sh@395 -- # return 1 00:03:40.530 20:14:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:40.530 1+0 records in 00:03:40.530 1+0 records out 00:03:40.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028695 s, 36.5 MB/s 00:03:40.530 20:14:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:40.530 20:14:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:40.530 20:14:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:40.530 20:14:24 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:40.530 20:14:24 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:40.530 No valid GPT data, bailing 00:03:40.530 20:14:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:40.530 20:14:24 -- scripts/common.sh@394 -- # pt= 00:03:40.530 20:14:24 -- scripts/common.sh@395 -- # return 1 00:03:40.530 20:14:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:40.530 1+0 records in 00:03:40.530 1+0 records out 00:03:40.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00598172 s, 175 MB/s 00:03:40.530 20:14:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:40.530 20:14:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:40.530 20:14:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:40.530 20:14:24 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:40.530 20:14:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:40.530 No valid GPT data, bailing 00:03:40.791 20:14:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:40.791 20:14:24 -- scripts/common.sh@394 -- # pt= 00:03:40.791 20:14:24 -- scripts/common.sh@395 -- # return 1 00:03:40.791 20:14:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:40.791 1+0 records in 00:03:40.791 1+0 records out 00:03:40.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00584174 s, 179 MB/s 00:03:40.791 20:14:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:40.791 20:14:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:40.791 20:14:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:40.791 20:14:24 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:40.791 20:14:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:40.791 No valid GPT data, bailing 00:03:40.791 20:14:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:40.791 20:14:24 -- scripts/common.sh@394 -- # pt= 00:03:40.791 20:14:24 -- scripts/common.sh@395 -- # return 1 00:03:40.791 20:14:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:40.791 1+0 records in 00:03:40.791 1+0 records out 00:03:40.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621706 s, 169 MB/s 00:03:40.791 20:14:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:40.791 20:14:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:40.791 20:14:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:03:40.791 20:14:24 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:03:40.791 20:14:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:03:40.791 No valid GPT data, bailing 00:03:40.791 20:14:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:03:40.791 20:14:24 -- scripts/common.sh@394 -- # pt= 00:03:40.791 20:14:24 -- scripts/common.sh@395 -- # return 1 00:03:40.791 20:14:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:03:40.791 1+0 records in 00:03:40.791 1+0 records out 00:03:40.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00584858 s, 179 MB/s 00:03:40.791 20:14:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:40.791 20:14:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:40.791 20:14:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:03:40.791 20:14:24 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:03:40.791 20:14:24 -- scripts/common.sh@390 -- 
# /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:03:40.791 No valid GPT data, bailing 00:03:41.055 20:14:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:03:41.055 20:14:25 -- scripts/common.sh@394 -- # pt= 00:03:41.055 20:14:25 -- scripts/common.sh@395 -- # return 1 00:03:41.055 20:14:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:03:41.055 1+0 records in 00:03:41.055 1+0 records out 00:03:41.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618461 s, 170 MB/s 00:03:41.055 20:14:25 -- spdk/autotest.sh@105 -- # sync 00:03:41.055 20:14:25 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:41.055 20:14:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:41.055 20:14:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:42.994 20:14:26 -- spdk/autotest.sh@111 -- # uname -s 00:03:42.994 20:14:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:42.994 20:14:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:42.994 20:14:26 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:43.289 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.549 Hugepages 00:03:43.549 node hugesize free / total 00:03:43.549 node0 1048576kB 0 / 0 00:03:43.549 node0 2048kB 0 / 0 00:03:43.550 00:03:43.550 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:43.811 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:43.811 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:43.811 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:43.811 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:03:44.072 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:44.072 20:14:28 -- spdk/autotest.sh@117 -- # uname -s 00:03:44.072 20:14:28 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:44.072 20:14:28 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:44.072 20:14:28 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:44.333 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.904 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.904 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.904 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:45.165 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:45.165 20:14:29 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:46.130 20:14:30 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:46.130 20:14:30 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:46.130 20:14:30 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:46.130 20:14:30 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:46.130 20:14:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:46.130 20:14:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:46.130 20:14:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:46.130 20:14:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:46.130 20:14:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:46.130 20:14:30 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:46.130 20:14:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:46.130 20:14:30 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:46.391 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.653 Waiting for block devices as requested 00:03:46.653 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:46.653 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:46.914 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:46.914 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:03:52.201 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:03:52.201 20:14:36 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:52.201 20:14:36 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:52.201 20:14:36 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:52.201 20:14:36 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:52.201 20:14:36 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:52.201 20:14:36 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:52.201 20:14:36 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:52.201 20:14:36 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:52.201 20:14:36 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:52.201 20:14:36 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:52.201 20:14:36 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:52.201 20:14:36 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:52.201 20:14:36 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:52.201 20:14:36 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:52.201 20:14:36 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:52.201 20:14:36 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:52.201 20:14:36 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:52.201 20:14:36 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:52.201 20:14:36 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:52.201 20:14:36 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:52.201 20:14:36 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:52.201 20:14:36 -- common/autotest_common.sh@1543 -- # continue 00:03:52.201 20:14:36 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:52.201 20:14:36 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:52.201 20:14:36 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:52.201 20:14:36 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:52.201 20:14:36 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:52.201 20:14:36 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:52.201 20:14:36 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:52.201 20:14:36 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:52.201 20:14:36 -- 
common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:52.201 20:14:36 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:52.201 20:14:36 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:52.201 20:14:36 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:52.201 20:14:36 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:52.201 20:14:36 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:52.201 20:14:36 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:52.201 20:14:36 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:52.202 20:14:36 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:52.202 20:14:36 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:52.202 20:14:36 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:52.202 20:14:36 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:52.202 20:14:36 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:52.202 20:14:36 -- common/autotest_common.sh@1543 -- # continue 00:03:52.202 20:14:36 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:52.202 20:14:36 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:03:52.202 20:14:36 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:52.202 20:14:36 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:03:52.202 20:14:36 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:52.202 20:14:36 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:03:52.202 20:14:36 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:52.202 20:14:36 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:03:52.202 20:14:36 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:03:52.202 20:14:36 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:03:52.202 20:14:36 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:52.202 20:14:36 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:52.202 20:14:36 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:03:52.202 20:14:36 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:52.202 20:14:36 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:52.202 20:14:36 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:52.202 20:14:36 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:52.202 20:14:36 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:03:52.202 20:14:36 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:52.202 20:14:36 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:52.202 20:14:36 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:52.202 20:14:36 -- common/autotest_common.sh@1543 -- # continue 00:03:52.202 20:14:36 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:52.202 20:14:36 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:03:52.202 20:14:36 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:03:52.202 20:14:36 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:52.202 20:14:36 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 
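The trace above repeats one pattern per controller: resolve a PCI address (BDF) to its /dev/nvmeN character device through the /sys/class/nvme symlinks, then parse 'nvme id-ctrl' output for the OACS word (bit 3, mask 0x8, is Namespace Management) and for unvmcap. A standalone sketch of that check, assuming nvme-cli is installed and the sysfs layout matches this run:

  #!/usr/bin/env bash
  # Sketch of the get_nvme_ctrlr_from_bdf + OACS/unvmcap checks traced above.
  # Run as root; nvme-cli and the /sys/class/nvme symlinks are assumed.
  bdf=${1:-0000:00:10.0}

  # Map the BDF to its controller node, mirroring:
  #   readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme"
  path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || exit 1
  ctrlr=/dev/$(basename "$path")

  # OACS is a hex bitmask (0x12a in this log); bit 3 = Namespace Management.
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
  (( oacs & 0x8 )) && echo "$ctrlr ($bdf): namespace management supported"

  # unvmcap of 0 means no unallocated capacity, so the revert path is skipped.
  unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
  (( unvmcap == 0 )) && echo "$ctrlr ($bdf): no unallocated NVM capacity"

On the QEMU controllers in this run both conditions hold for every BDF, so each iteration ends in 'continue' and nothing is reverted.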
00:03:52.202 20:14:36 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:03:52.202 20:14:36 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:52.202 20:14:36 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:03:52.202 20:14:36 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:03:52.202 20:14:36 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:03:52.202 20:14:36 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:03:52.202 20:14:36 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:52.202 20:14:36 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:52.202 20:14:36 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:52.202 20:14:36 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:52.202 20:14:36 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:52.202 20:14:36 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:03:52.202 20:14:36 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:52.202 20:14:36 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:52.202 20:14:36 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:52.202 20:14:36 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:52.202 20:14:36 -- common/autotest_common.sh@1543 -- # continue 00:03:52.202 20:14:36 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:52.202 20:14:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:52.202 20:14:36 -- common/autotest_common.sh@10 -- # set +x 00:03:52.202 20:14:36 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:52.202 20:14:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:52.202 20:14:36 -- common/autotest_common.sh@10 -- # set +x 00:03:52.202 20:14:36 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:52.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.345 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:53.345 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:53.345 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:53.345 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:53.345 20:14:37 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:53.345 20:14:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.345 20:14:37 -- common/autotest_common.sh@10 -- # set +x 00:03:53.345 20:14:37 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:53.345 20:14:37 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:53.345 20:14:37 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:53.345 20:14:37 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:53.345 20:14:37 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:53.345 20:14:37 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:53.345 20:14:37 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:53.345 20:14:37 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:53.345 20:14:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:53.345 20:14:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:53.345 20:14:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.345 20:14:37 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:53.345 20:14:37 -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:53.607 20:14:37 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:53.607 20:14:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:53.607 20:14:37 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:53.607 20:14:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:53.607 20:14:37 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:53.607 20:14:37 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:53.607 20:14:37 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:53.607 20:14:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:53.607 20:14:37 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:53.607 20:14:37 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:53.607 20:14:37 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:53.607 20:14:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:03:53.607 20:14:37 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:53.607 20:14:37 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:53.607 20:14:37 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:53.607 20:14:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:03:53.607 20:14:37 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:53.608 20:14:37 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:53.608 20:14:37 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:53.608 20:14:37 -- common/autotest_common.sh@1572 -- # return 0 00:03:53.608 20:14:37 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:53.608 20:14:37 -- common/autotest_common.sh@1580 -- # return 0 00:03:53.608 20:14:37 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:53.608 20:14:37 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:53.608 20:14:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:53.608 20:14:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:53.608 20:14:37 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:53.608 20:14:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.608 20:14:37 -- common/autotest_common.sh@10 -- # set +x 00:03:53.608 20:14:37 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:53.608 20:14:37 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:53.608 20:14:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.608 20:14:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.608 20:14:37 -- common/autotest_common.sh@10 -- # set +x 00:03:53.608 ************************************ 00:03:53.608 START TEST env 00:03:53.608 ************************************ 00:03:53.608 20:14:37 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:53.608 * Looking for test storage... 
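The opal_revert_cleanup trace above builds its device list the same way the rest of autotest does: scripts/gen_nvme.sh emits a JSON bdev config, jq pulls every params.traddr (a PCI BDF) out of it, and each BDF's PCI device ID is read from sysfs and compared with 0x0a54, the controller the revert step targets. The QEMU devices here all report 0x0010, so the filtered list stays empty and the function returns straight away, as the '(( 0 > 0 ))' check shows. A minimal sketch of that enumeration, assuming jq and the repo paths from this log:

  #!/usr/bin/env bash
  # Sketch of get_nvme_bdfs plus the per-BDF device-ID filter traced above.
  rootdir=/home/vagrant/spdk_repo/spdk

  # Every params.traddr in the generated bdev config is an NVMe PCI BDF.
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  (( ${#bdfs[@]} )) || { echo "no NVMe devices found" >&2; exit 1; }

  for bdf in "${bdfs[@]}"; do
      device=$(<"/sys/bus/pci/devices/$bdf/device")
      # 0x0a54 is the ID the revert step looks for; QEMU reports 0x0010.
      [[ $device == 0x0a54 ]] && echo "would run opal revert on $bdf"
  done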
00:03:53.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:53.608 20:14:37 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:53.608 20:14:37 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:53.608 20:14:37 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:53.608 20:14:37 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:53.608 20:14:37 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.608 20:14:37 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.608 20:14:37 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.608 20:14:37 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.608 20:14:37 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.608 20:14:37 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.608 20:14:37 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.608 20:14:37 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.869 20:14:37 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.869 20:14:37 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.869 20:14:37 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.870 20:14:37 env -- scripts/common.sh@344 -- # case "$op" in 00:03:53.870 20:14:37 env -- scripts/common.sh@345 -- # : 1 00:03:53.870 20:14:37 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.870 20:14:37 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:53.870 20:14:37 env -- scripts/common.sh@365 -- # decimal 1 00:03:53.870 20:14:37 env -- scripts/common.sh@353 -- # local d=1 00:03:53.870 20:14:37 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.870 20:14:37 env -- scripts/common.sh@355 -- # echo 1 00:03:53.870 20:14:37 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.870 20:14:37 env -- scripts/common.sh@366 -- # decimal 2 00:03:53.870 20:14:37 env -- scripts/common.sh@353 -- # local d=2 00:03:53.870 20:14:37 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.870 20:14:37 env -- scripts/common.sh@355 -- # echo 2 00:03:53.870 20:14:37 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.870 20:14:37 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.870 20:14:37 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.870 20:14:37 env -- scripts/common.sh@368 -- # return 0 00:03:53.870 20:14:37 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.870 20:14:37 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:53.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.870 --rc genhtml_branch_coverage=1 00:03:53.870 --rc genhtml_function_coverage=1 00:03:53.870 --rc genhtml_legend=1 00:03:53.870 --rc geninfo_all_blocks=1 00:03:53.870 --rc geninfo_unexecuted_blocks=1 00:03:53.870 00:03:53.870 ' 00:03:53.870 20:14:37 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:53.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.870 --rc genhtml_branch_coverage=1 00:03:53.870 --rc genhtml_function_coverage=1 00:03:53.870 --rc genhtml_legend=1 00:03:53.870 --rc geninfo_all_blocks=1 00:03:53.870 --rc geninfo_unexecuted_blocks=1 00:03:53.870 00:03:53.870 ' 00:03:53.870 20:14:37 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:53.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.870 --rc genhtml_branch_coverage=1 00:03:53.870 --rc genhtml_function_coverage=1 00:03:53.870 --rc 
genhtml_legend=1 00:03:53.870 --rc geninfo_all_blocks=1 00:03:53.870 --rc geninfo_unexecuted_blocks=1 00:03:53.870 00:03:53.870 ' 00:03:53.870 20:14:37 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:53.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.870 --rc genhtml_branch_coverage=1 00:03:53.870 --rc genhtml_function_coverage=1 00:03:53.870 --rc genhtml_legend=1 00:03:53.870 --rc geninfo_all_blocks=1 00:03:53.870 --rc geninfo_unexecuted_blocks=1 00:03:53.870 00:03:53.870 ' 00:03:53.870 20:14:37 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:53.870 20:14:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.870 20:14:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.870 20:14:37 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.870 ************************************ 00:03:53.870 START TEST env_memory 00:03:53.870 ************************************ 00:03:53.870 20:14:37 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:53.870 00:03:53.870 00:03:53.870 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.870 http://cunit.sourceforge.net/ 00:03:53.870 00:03:53.870 00:03:53.870 Suite: memory 00:03:53.870 Test: alloc and free memory map ...[2024-12-12 20:14:37.923769] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:53.870 passed 00:03:53.870 Test: mem map translation ...[2024-12-12 20:14:37.963332] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:53.870 [2024-12-12 20:14:37.963398] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:53.870 [2024-12-12 20:14:37.963489] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:53.870 [2024-12-12 20:14:37.963513] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:53.870 passed 00:03:53.870 Test: mem map registration ...[2024-12-12 20:14:38.031927] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:53.870 [2024-12-12 20:14:38.031984] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:53.870 passed 00:03:54.131 Test: mem map adjacent registrations ...passed 00:03:54.131 00:03:54.131 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.131 suites 1 1 n/a 0 0 00:03:54.131 tests 4 4 4 0 0 00:03:54.131 asserts 152 152 152 0 n/a 00:03:54.131 00:03:54.131 Elapsed time = 0.234 seconds 00:03:54.131 00:03:54.131 real 0m0.274s 00:03:54.131 user 0m0.242s 00:03:54.131 sys 0m0.023s 00:03:54.131 20:14:38 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.131 20:14:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:54.132 ************************************ 00:03:54.132 END TEST env_memory 00:03:54.132 ************************************ 00:03:54.132 20:14:38 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:54.132 20:14:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.132 20:14:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.132 20:14:38 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.132 ************************************ 00:03:54.132 START TEST env_vtophys 00:03:54.132 ************************************ 00:03:54.132 20:14:38 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:54.132 EAL: lib.eal log level changed from notice to debug 00:03:54.132 EAL: Detected lcore 0 as core 0 on socket 0 00:03:54.132 EAL: Detected lcore 1 as core 0 on socket 0 00:03:54.132 EAL: Detected lcore 2 as core 0 on socket 0 00:03:54.132 EAL: Detected lcore 3 as core 0 on socket 0 00:03:54.132 EAL: Detected lcore 4 as core 0 on socket 0 00:03:54.132 EAL: Detected lcore 5 as core 0 on socket 0 00:03:54.132 EAL: Detected lcore 6 as core 0 on socket 0 00:03:54.132 EAL: Detected lcore 7 as core 0 on socket 0 00:03:54.132 EAL: Detected lcore 8 as core 0 on socket 0 00:03:54.132 EAL: Detected lcore 9 as core 0 on socket 0 00:03:54.132 EAL: Maximum logical cores by configuration: 128 00:03:54.132 EAL: Detected CPU lcores: 10 00:03:54.132 EAL: Detected NUMA nodes: 1 00:03:54.132 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:54.132 EAL: Detected shared linkage of DPDK 00:03:54.132 EAL: No shared files mode enabled, IPC will be disabled 00:03:54.132 EAL: Selected IOVA mode 'PA' 00:03:54.132 EAL: Probing VFIO support... 00:03:54.132 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:54.132 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:54.132 EAL: Ask a virtual area of 0x2e000 bytes 00:03:54.132 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:54.132 EAL: Setting up physically contiguous memory... 
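
The *ERROR* lines printed earlier by env_memory are the expected output of negative-path checks: spdk_mem_map_set_translation() rejects any vaddr or length that is not 2 MiB aligned (the vaddr=4d2 and len=1234 calls above) and any address outside the usermode virtual range. A minimal sketch of the valid-path usage, assuming the declarations in spdk/env.h and an already-initialized SPDK environment; this is not the memory_ut source, just the pattern it exercises:

    #include "spdk/env.h"

    /* Accept every region the map is notified about (sketch only). */
    static int
    notify_cb(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
            return 0;
    }

    static const struct spdk_mem_map_ops ops = { .notify_cb = notify_cb };

    static void
    mem_map_sketch(void)
    {
            struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);
            uint64_t len = 0x200000;

            /* Both vaddr and size must be multiples of 2 MiB, unlike the
             * deliberately bad vaddr=4d2 / len=1234 calls in the log above. */
            spdk_mem_map_set_translation(map, 0x200000, 0x200000, 0xdeadbeef);
            spdk_mem_map_translate(map, 0x200000, &len);
            spdk_mem_map_clear_translation(map, 0x200000, 0x200000);
            spdk_mem_map_free(&map);
    }
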
00:03:54.132 EAL: Setting maximum number of open files to 524288 00:03:54.132 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:54.132 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:54.132 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.132 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:54.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.132 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.132 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:54.132 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:54.132 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.132 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:54.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.132 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.132 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:54.132 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:54.132 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.132 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:54.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.132 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.132 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:54.132 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:54.132 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.132 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:54.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.132 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.132 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:54.132 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:54.132 EAL: Hugepages will be freed exactly as allocated. 00:03:54.132 EAL: No shared files mode enabled, IPC is disabled 00:03:54.132 EAL: No shared files mode enabled, IPC is disabled 00:03:54.393 EAL: TSC frequency is ~2600000 KHz 00:03:54.393 EAL: Main lcore 0 is ready (tid=7f1660ed4a40;cpuset=[0]) 00:03:54.393 EAL: Trying to obtain current memory policy. 00:03:54.393 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.393 EAL: Restoring previous memory policy: 0 00:03:54.393 EAL: request: mp_malloc_sync 00:03:54.393 EAL: No shared files mode enabled, IPC is disabled 00:03:54.393 EAL: Heap on socket 0 was expanded by 2MB 00:03:54.393 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:54.393 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:54.393 EAL: Mem event callback 'spdk:(nil)' registered 00:03:54.393 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:54.393 00:03:54.393 00:03:54.393 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.393 http://cunit.sourceforge.net/ 00:03:54.393 00:03:54.393 00:03:54.393 Suite: components_suite 00:03:54.655 Test: vtophys_malloc_test ...passed 00:03:54.655 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
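
The vtophys_malloc_test rounds that follow repeatedly grow and shrink the heap (the "expanded by"/"shrunk by" lines) and assert that every buffer stays translatable to a physical address. The core check, sketched against the public env API from spdk/env.h; the real test additionally drives plain malloc() through the mem-event callbacks, so treat this only as the shape of the assertion:

    #include <assert.h>
    #include "spdk/env.h"

    static void
    vtophys_sketch(size_t size)
    {
            /* DMA-safe allocation backed by the hugepage memsegs set up above. */
            void *buf = spdk_dma_malloc(size, 0, NULL);
            uint64_t len = size;

            assert(buf != NULL);
            /* Must resolve to a real physical address, never SPDK_VTOPHYS_ERROR. */
            assert(spdk_vtophys(buf, &len) != SPDK_VTOPHYS_ERROR);
            spdk_dma_free(buf);
    }
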
00:03:54.655 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.655 EAL: Restoring previous memory policy: 4 00:03:54.655 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.655 EAL: request: mp_malloc_sync 00:03:54.655 EAL: No shared files mode enabled, IPC is disabled 00:03:54.655 EAL: Heap on socket 0 was expanded by 4MB 00:03:54.655 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.655 EAL: request: mp_malloc_sync 00:03:54.655 EAL: No shared files mode enabled, IPC is disabled 00:03:54.655 EAL: Heap on socket 0 was shrunk by 4MB 00:03:54.655 EAL: Trying to obtain current memory policy. 00:03:54.655 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.655 EAL: Restoring previous memory policy: 4 00:03:54.655 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.655 EAL: request: mp_malloc_sync 00:03:54.655 EAL: No shared files mode enabled, IPC is disabled 00:03:54.655 EAL: Heap on socket 0 was expanded by 6MB 00:03:54.655 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.655 EAL: request: mp_malloc_sync 00:03:54.655 EAL: No shared files mode enabled, IPC is disabled 00:03:54.655 EAL: Heap on socket 0 was shrunk by 6MB 00:03:54.655 EAL: Trying to obtain current memory policy. 00:03:54.655 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.655 EAL: Restoring previous memory policy: 4 00:03:54.655 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.655 EAL: request: mp_malloc_sync 00:03:54.655 EAL: No shared files mode enabled, IPC is disabled 00:03:54.655 EAL: Heap on socket 0 was expanded by 10MB 00:03:54.655 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.655 EAL: request: mp_malloc_sync 00:03:54.655 EAL: No shared files mode enabled, IPC is disabled 00:03:54.655 EAL: Heap on socket 0 was shrunk by 10MB 00:03:54.655 EAL: Trying to obtain current memory policy. 00:03:54.655 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.655 EAL: Restoring previous memory policy: 4 00:03:54.655 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.655 EAL: request: mp_malloc_sync 00:03:54.655 EAL: No shared files mode enabled, IPC is disabled 00:03:54.655 EAL: Heap on socket 0 was expanded by 18MB 00:03:54.655 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.655 EAL: request: mp_malloc_sync 00:03:54.655 EAL: No shared files mode enabled, IPC is disabled 00:03:54.655 EAL: Heap on socket 0 was shrunk by 18MB 00:03:54.655 EAL: Trying to obtain current memory policy. 00:03:54.655 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.918 EAL: Restoring previous memory policy: 4 00:03:54.918 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.918 EAL: request: mp_malloc_sync 00:03:54.918 EAL: No shared files mode enabled, IPC is disabled 00:03:54.918 EAL: Heap on socket 0 was expanded by 34MB 00:03:54.918 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.918 EAL: request: mp_malloc_sync 00:03:54.918 EAL: No shared files mode enabled, IPC is disabled 00:03:54.918 EAL: Heap on socket 0 was shrunk by 34MB 00:03:54.918 EAL: Trying to obtain current memory policy. 
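
The expansion sizes form a regular series: each round requests a buffer twice as large as the last (2 MB, 4 MB, 8 MB, 16 MB, 32 MB, ...), and the heap grows by the request plus one extra 2 MiB hugepage, hence 4 = 2+2, 6 = 4+2, 10 = 8+2, 18 = 16+2 and 34 = 32+2 MB above (and 66, 130, 258, 514 and 1026 MB in the rounds that follow). The extra page is presumably the DPDK malloc element header pushing the request past a hugepage boundary once rounded to 2 MiB granularity; that interpretation is inferred from the numbers, not from the test source.
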
00:03:54.918 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.918 EAL: Restoring previous memory policy: 4 00:03:54.918 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.918 EAL: request: mp_malloc_sync 00:03:54.918 EAL: No shared files mode enabled, IPC is disabled 00:03:54.918 EAL: Heap on socket 0 was expanded by 66MB 00:03:54.918 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.918 EAL: request: mp_malloc_sync 00:03:54.918 EAL: No shared files mode enabled, IPC is disabled 00:03:54.918 EAL: Heap on socket 0 was shrunk by 66MB 00:03:54.918 EAL: Trying to obtain current memory policy. 00:03:54.918 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.179 EAL: Restoring previous memory policy: 4 00:03:55.179 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.179 EAL: request: mp_malloc_sync 00:03:55.179 EAL: No shared files mode enabled, IPC is disabled 00:03:55.179 EAL: Heap on socket 0 was expanded by 130MB 00:03:55.179 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.179 EAL: request: mp_malloc_sync 00:03:55.179 EAL: No shared files mode enabled, IPC is disabled 00:03:55.179 EAL: Heap on socket 0 was shrunk by 130MB 00:03:55.441 EAL: Trying to obtain current memory policy. 00:03:55.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.441 EAL: Restoring previous memory policy: 4 00:03:55.441 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.441 EAL: request: mp_malloc_sync 00:03:55.441 EAL: No shared files mode enabled, IPC is disabled 00:03:55.441 EAL: Heap on socket 0 was expanded by 258MB 00:03:55.702 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.702 EAL: request: mp_malloc_sync 00:03:55.702 EAL: No shared files mode enabled, IPC is disabled 00:03:55.702 EAL: Heap on socket 0 was shrunk by 258MB 00:03:55.963 EAL: Trying to obtain current memory policy. 00:03:55.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.224 EAL: Restoring previous memory policy: 4 00:03:56.224 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.224 EAL: request: mp_malloc_sync 00:03:56.224 EAL: No shared files mode enabled, IPC is disabled 00:03:56.224 EAL: Heap on socket 0 was expanded by 514MB 00:03:56.831 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.831 EAL: request: mp_malloc_sync 00:03:56.831 EAL: No shared files mode enabled, IPC is disabled 00:03:56.831 EAL: Heap on socket 0 was shrunk by 514MB 00:03:57.404 EAL: Trying to obtain current memory policy. 
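
Each "Calling mem event callback 'spdk:(nil)'" line marks a DPDK dynamic-memory event: when the heap grows or shrinks, DPDK walks its registered callbacks, and SPDK's hook registers or unregisters the affected region with its own memory maps. The registration itself is plain DPDK API; a sketch assuming rte_memory.h, where "spdk" is the callback name visible in the log and (nil) is the NULL user argument:

    #include <rte_memory.h>

    static void
    mem_event_cb(enum rte_mem_event event_type, const void *addr, size_t len, void *arg)
    {
            /* On RTE_MEM_EVENT_ALLOC, SPDK registers the new region with
             * spdk_mem_register(); on RTE_MEM_EVENT_FREE it unregisters it. */
    }

    static void
    hook_mem_events(void)
    {
            rte_mem_event_callback_register("spdk", mem_event_cb, NULL);
    }
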
00:03:57.404 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.664 EAL: Restoring previous memory policy: 4 00:03:57.664 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.664 EAL: request: mp_malloc_sync 00:03:57.664 EAL: No shared files mode enabled, IPC is disabled 00:03:57.664 EAL: Heap on socket 0 was expanded by 1026MB 00:03:59.052 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.052 EAL: request: mp_malloc_sync 00:03:59.052 EAL: No shared files mode enabled, IPC is disabled 00:03:59.052 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:59.993 passed 00:03:59.993 00:03:59.993 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.993 suites 1 1 n/a 0 0 00:03:59.993 tests 2 2 2 0 0 00:03:59.993 asserts 5901 5901 5901 0 n/a 00:03:59.993 00:03:59.993 Elapsed time = 5.699 seconds 00:03:59.993 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.993 EAL: request: mp_malloc_sync 00:03:59.993 EAL: No shared files mode enabled, IPC is disabled 00:03:59.993 EAL: Heap on socket 0 was shrunk by 2MB 00:03:59.993 EAL: No shared files mode enabled, IPC is disabled 00:03:59.993 EAL: No shared files mode enabled, IPC is disabled 00:03:59.993 EAL: No shared files mode enabled, IPC is disabled 00:03:59.993 00:03:59.993 real 0m5.979s 00:03:59.993 user 0m4.889s 00:03:59.993 sys 0m0.930s 00:03:59.993 20:14:44 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.993 ************************************ 00:03:59.993 END TEST env_vtophys 00:03:59.993 ************************************ 00:03:59.993 20:14:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:00.255 20:14:44 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:00.255 20:14:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.255 20:14:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.255 20:14:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.255 ************************************ 00:04:00.255 START TEST env_pci 00:04:00.255 ************************************ 00:04:00.255 20:14:44 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:00.255 00:04:00.255 00:04:00.255 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.255 http://cunit.sourceforge.net/ 00:04:00.255 00:04:00.255 00:04:00.255 Suite: pci 00:04:00.255 Test: pci_hook ...[2024-12-12 20:14:44.282502] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58831 has claimed it 00:04:00.255 passed 00:04:00.255 00:04:00.255 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.255 suites 1 1 n/a 0 0 00:04:00.255 tests 1 1 1 0 0 00:04:00.255 asserts 25 25 25 0 n/a 00:04:00.255 00:04:00.255 Elapsed time = 0.006 seconds 00:04:00.255 EAL: Cannot find device (10000:00:01.0) 00:04:00.255 EAL: Failed to attach device on primary process 00:04:00.255 00:04:00.255 real 0m0.069s 00:04:00.255 user 0m0.037s 00:04:00.255 sys 0m0.031s 00:04:00.255 20:14:44 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.255 20:14:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:00.255 ************************************ 00:04:00.255 END TEST env_pci 00:04:00.255 ************************************ 00:04:00.255 20:14:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:00.255 20:14:44 env -- env/env.sh@15 -- # uname 00:04:00.255 20:14:44 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:00.255 20:14:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:00.255 20:14:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:00.255 20:14:44 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:00.255 20:14:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.255 20:14:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.255 ************************************ 00:04:00.255 START TEST env_dpdk_post_init 00:04:00.255 ************************************ 00:04:00.255 20:14:44 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:00.255 EAL: Detected CPU lcores: 10 00:04:00.255 EAL: Detected NUMA nodes: 1 00:04:00.255 EAL: Detected shared linkage of DPDK 00:04:00.255 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:00.255 EAL: Selected IOVA mode 'PA' 00:04:00.517 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:00.517 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:00.517 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:00.517 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:00.517 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:00.517 Starting DPDK initialization... 00:04:00.517 Starting SPDK post initialization... 00:04:00.517 SPDK NVMe probe 00:04:00.517 Attaching to 0000:00:10.0 00:04:00.517 Attaching to 0000:00:11.0 00:04:00.517 Attaching to 0000:00:12.0 00:04:00.517 Attaching to 0000:00:13.0 00:04:00.517 Attached to 0000:00:13.0 00:04:00.517 Attached to 0000:00:10.0 00:04:00.517 Attached to 0000:00:11.0 00:04:00.517 Attached to 0000:00:12.0 00:04:00.517 Cleaning up... 
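
env_dpdk_post_init was invoked with -c 0x1 --base-virtaddr=0x200000000000 (the argv assembled by env.sh above); inside the binary those flags land in struct spdk_env_opts before DPDK comes up and the four emulated NVMe controllers (1b36:0010) are probed. A sketch of that initialization, with field names taken from spdk/env.h; the opts_size handling varies across SPDK releases, so treat it as an assumption for recent versions:

    #include "spdk/env.h"

    static int
    env_init_sketch(void)
    {
            struct spdk_env_opts opts;

            opts.opts_size = sizeof(opts);        /* expected before _init on recent SPDK */
            spdk_env_opts_init(&opts);
            opts.name = "env_dpdk_post_init";
            opts.core_mask = "0x1";               /* -c 0x1 */
            opts.base_virtaddr = 0x200000000000;  /* keeps VAs stable across processes */

            return spdk_env_init(&opts);          /* 0 on success */
    }
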
00:04:00.517 00:04:00.517 real 0m0.257s 00:04:00.517 user 0m0.092s 00:04:00.517 sys 0m0.066s 00:04:00.517 ************************************ 00:04:00.517 END TEST env_dpdk_post_init 00:04:00.517 ************************************ 00:04:00.517 20:14:44 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.517 20:14:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:00.517 20:14:44 env -- env/env.sh@26 -- # uname 00:04:00.517 20:14:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:00.517 20:14:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:00.517 20:14:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.517 20:14:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.517 20:14:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.517 ************************************ 00:04:00.517 START TEST env_mem_callbacks 00:04:00.517 ************************************ 00:04:00.517 20:14:44 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:00.517 EAL: Detected CPU lcores: 10 00:04:00.517 EAL: Detected NUMA nodes: 1 00:04:00.517 EAL: Detected shared linkage of DPDK 00:04:00.779 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:00.779 EAL: Selected IOVA mode 'PA' 00:04:00.779 00:04:00.779 00:04:00.779 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.779 http://cunit.sourceforge.net/ 00:04:00.779 00:04:00.779 00:04:00.779 Suite: memory 00:04:00.779 Test: test ... 00:04:00.779 register 0x200000200000 2097152 00:04:00.779 malloc 3145728 00:04:00.779 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:00.779 register 0x200000400000 4194304 00:04:00.779 buf 0x2000004fffc0 len 3145728 PASSED 00:04:00.779 malloc 64 00:04:00.779 buf 0x2000004ffec0 len 64 PASSED 00:04:00.779 malloc 4194304 00:04:00.779 register 0x200000800000 6291456 00:04:00.779 buf 0x2000009fffc0 len 4194304 PASSED 00:04:00.779 free 0x2000004fffc0 3145728 00:04:00.779 free 0x2000004ffec0 64 00:04:00.779 unregister 0x200000400000 4194304 PASSED 00:04:00.779 free 0x2000009fffc0 4194304 00:04:00.779 unregister 0x200000800000 6291456 PASSED 00:04:00.779 malloc 8388608 00:04:00.779 register 0x200000400000 10485760 00:04:00.779 buf 0x2000005fffc0 len 8388608 PASSED 00:04:00.779 free 0x2000005fffc0 8388608 00:04:00.779 unregister 0x200000400000 10485760 PASSED 00:04:00.779 passed 00:04:00.779 00:04:00.779 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.779 suites 1 1 n/a 0 0 00:04:00.779 tests 1 1 1 0 0 00:04:00.779 asserts 15 15 15 0 n/a 00:04:00.779 00:04:00.779 Elapsed time = 0.053 seconds 00:04:00.779 00:04:00.779 real 0m0.230s 00:04:00.779 user 0m0.070s 00:04:00.779 sys 0m0.056s 00:04:00.779 20:14:44 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.779 20:14:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:00.779 ************************************ 00:04:00.779 END TEST env_mem_callbacks 00:04:00.779 ************************************ 00:04:00.779 ************************************ 00:04:00.779 END TEST env 00:04:00.779 ************************************ 00:04:00.779 00:04:00.779 real 0m7.310s 00:04:00.779 user 0m5.483s 00:04:00.779 sys 0m1.333s 00:04:00.779 20:14:44 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.779 20:14:44 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:01.049 20:14:45 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:01.049 20:14:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.049 20:14:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.049 20:14:45 -- common/autotest_common.sh@10 -- # set +x 00:04:01.049 ************************************ 00:04:01.049 START TEST rpc 00:04:01.049 ************************************ 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:01.049 * Looking for test storage... 00:04:01.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:01.049 20:14:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.049 20:14:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.049 20:14:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.049 20:14:45 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.049 20:14:45 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.049 20:14:45 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.049 20:14:45 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.049 20:14:45 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.049 20:14:45 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.049 20:14:45 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.049 20:14:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.049 20:14:45 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:01.049 20:14:45 rpc -- scripts/common.sh@345 -- # : 1 00:04:01.049 20:14:45 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.049 20:14:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.049 20:14:45 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:01.049 20:14:45 rpc -- scripts/common.sh@353 -- # local d=1 00:04:01.049 20:14:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.049 20:14:45 rpc -- scripts/common.sh@355 -- # echo 1 00:04:01.049 20:14:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.049 20:14:45 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:01.049 20:14:45 rpc -- scripts/common.sh@353 -- # local d=2 00:04:01.049 20:14:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.049 20:14:45 rpc -- scripts/common.sh@355 -- # echo 2 00:04:01.049 20:14:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.049 20:14:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.049 20:14:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.049 20:14:45 rpc -- scripts/common.sh@368 -- # return 0 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:01.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.049 --rc genhtml_branch_coverage=1 00:04:01.049 --rc genhtml_function_coverage=1 00:04:01.049 --rc genhtml_legend=1 00:04:01.049 --rc geninfo_all_blocks=1 00:04:01.049 --rc geninfo_unexecuted_blocks=1 00:04:01.049 00:04:01.049 ' 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:01.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.049 --rc genhtml_branch_coverage=1 00:04:01.049 --rc genhtml_function_coverage=1 00:04:01.049 --rc genhtml_legend=1 00:04:01.049 --rc geninfo_all_blocks=1 00:04:01.049 --rc geninfo_unexecuted_blocks=1 00:04:01.049 00:04:01.049 ' 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:01.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.049 --rc genhtml_branch_coverage=1 00:04:01.049 --rc genhtml_function_coverage=1 00:04:01.049 --rc genhtml_legend=1 00:04:01.049 --rc geninfo_all_blocks=1 00:04:01.049 --rc geninfo_unexecuted_blocks=1 00:04:01.049 00:04:01.049 ' 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:01.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.049 --rc genhtml_branch_coverage=1 00:04:01.049 --rc genhtml_function_coverage=1 00:04:01.049 --rc genhtml_legend=1 00:04:01.049 --rc geninfo_all_blocks=1 00:04:01.049 --rc geninfo_unexecuted_blocks=1 00:04:01.049 00:04:01.049 ' 00:04:01.049 20:14:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58958 00:04:01.049 20:14:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.049 20:14:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58958 00:04:01.049 20:14:45 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@835 -- # '[' -z 58958 ']' 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.049 20:14:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.311 [2024-12-12 20:14:45.309447] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:04:01.311 [2024-12-12 20:14:45.309856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58958 ] 00:04:01.311 [2024-12-12 20:14:45.474574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.572 [2024-12-12 20:14:45.622729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:01.572 [2024-12-12 20:14:45.622793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58958' to capture a snapshot of events at runtime. 00:04:01.572 [2024-12-12 20:14:45.622809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:01.572 [2024-12-12 20:14:45.622826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:01.572 [2024-12-12 20:14:45.622835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58958 for offline analysis/debug. 00:04:01.572 [2024-12-12 20:14:45.623758] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.146 20:14:46 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.146 20:14:46 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:02.146 20:14:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:02.146 20:14:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:02.146 20:14:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:02.146 20:14:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:02.146 20:14:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.146 20:14:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.146 20:14:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.146 ************************************ 00:04:02.146 START TEST rpc_integrity 00:04:02.146 ************************************ 00:04:02.146 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:02.146 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:02.146 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.146 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.409 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.409 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:02.409 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:02.409 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:02.409 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:02.409 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.409 20:14:46 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.409 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.409 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:02.409 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:02.409 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.409 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.409 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.409 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:02.409 { 00:04:02.409 "name": "Malloc0", 00:04:02.409 "aliases": [ 00:04:02.409 "ea0af3c4-92d9-404b-8cf7-8f0e5e4a2e72" 00:04:02.409 ], 00:04:02.409 "product_name": "Malloc disk", 00:04:02.409 "block_size": 512, 00:04:02.409 "num_blocks": 16384, 00:04:02.409 "uuid": "ea0af3c4-92d9-404b-8cf7-8f0e5e4a2e72", 00:04:02.409 "assigned_rate_limits": { 00:04:02.409 "rw_ios_per_sec": 0, 00:04:02.409 "rw_mbytes_per_sec": 0, 00:04:02.409 "r_mbytes_per_sec": 0, 00:04:02.409 "w_mbytes_per_sec": 0 00:04:02.409 }, 00:04:02.409 "claimed": false, 00:04:02.409 "zoned": false, 00:04:02.409 "supported_io_types": { 00:04:02.409 "read": true, 00:04:02.409 "write": true, 00:04:02.409 "unmap": true, 00:04:02.409 "flush": true, 00:04:02.409 "reset": true, 00:04:02.409 "nvme_admin": false, 00:04:02.409 "nvme_io": false, 00:04:02.409 "nvme_io_md": false, 00:04:02.409 "write_zeroes": true, 00:04:02.409 "zcopy": true, 00:04:02.409 "get_zone_info": false, 00:04:02.409 "zone_management": false, 00:04:02.409 "zone_append": false, 00:04:02.409 "compare": false, 00:04:02.409 "compare_and_write": false, 00:04:02.409 "abort": true, 00:04:02.409 "seek_hole": false, 00:04:02.409 "seek_data": false, 00:04:02.409 "copy": true, 00:04:02.409 "nvme_iov_md": false 00:04:02.409 }, 00:04:02.409 "memory_domains": [ 00:04:02.409 { 00:04:02.409 "dma_device_id": "system", 00:04:02.409 "dma_device_type": 1 00:04:02.409 }, 00:04:02.409 { 00:04:02.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.409 "dma_device_type": 2 00:04:02.409 } 00:04:02.409 ], 00:04:02.409 "driver_specific": {} 00:04:02.409 } 00:04:02.409 ]' 00:04:02.409 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:02.409 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:02.409 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:02.409 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.409 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.409 [2024-12-12 20:14:46.487781] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:02.409 [2024-12-12 20:14:46.487864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:02.409 [2024-12-12 20:14:46.487895] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:02.409 [2024-12-12 20:14:46.487909] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:02.409 [2024-12-12 20:14:46.490483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:02.409 [2024-12-12 20:14:46.490542] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:02.409 Passthru0 00:04:02.409 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.409 
20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:02.409 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.409 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.409 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.409 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:02.409 { 00:04:02.409 "name": "Malloc0", 00:04:02.409 "aliases": [ 00:04:02.409 "ea0af3c4-92d9-404b-8cf7-8f0e5e4a2e72" 00:04:02.409 ], 00:04:02.409 "product_name": "Malloc disk", 00:04:02.409 "block_size": 512, 00:04:02.409 "num_blocks": 16384, 00:04:02.409 "uuid": "ea0af3c4-92d9-404b-8cf7-8f0e5e4a2e72", 00:04:02.409 "assigned_rate_limits": { 00:04:02.409 "rw_ios_per_sec": 0, 00:04:02.409 "rw_mbytes_per_sec": 0, 00:04:02.409 "r_mbytes_per_sec": 0, 00:04:02.409 "w_mbytes_per_sec": 0 00:04:02.409 }, 00:04:02.409 "claimed": true, 00:04:02.409 "claim_type": "exclusive_write", 00:04:02.409 "zoned": false, 00:04:02.409 "supported_io_types": { 00:04:02.409 "read": true, 00:04:02.409 "write": true, 00:04:02.409 "unmap": true, 00:04:02.409 "flush": true, 00:04:02.409 "reset": true, 00:04:02.409 "nvme_admin": false, 00:04:02.409 "nvme_io": false, 00:04:02.409 "nvme_io_md": false, 00:04:02.409 "write_zeroes": true, 00:04:02.409 "zcopy": true, 00:04:02.409 "get_zone_info": false, 00:04:02.409 "zone_management": false, 00:04:02.409 "zone_append": false, 00:04:02.409 "compare": false, 00:04:02.409 "compare_and_write": false, 00:04:02.409 "abort": true, 00:04:02.409 "seek_hole": false, 00:04:02.409 "seek_data": false, 00:04:02.409 "copy": true, 00:04:02.409 "nvme_iov_md": false 00:04:02.409 }, 00:04:02.409 "memory_domains": [ 00:04:02.409 { 00:04:02.409 "dma_device_id": "system", 00:04:02.409 "dma_device_type": 1 00:04:02.409 }, 00:04:02.409 { 00:04:02.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.409 "dma_device_type": 2 00:04:02.409 } 00:04:02.409 ], 00:04:02.409 "driver_specific": {} 00:04:02.409 }, 00:04:02.409 { 00:04:02.409 "name": "Passthru0", 00:04:02.409 "aliases": [ 00:04:02.409 "ffbd4a76-3f34-5c6f-b3af-713ded190466" 00:04:02.409 ], 00:04:02.409 "product_name": "passthru", 00:04:02.409 "block_size": 512, 00:04:02.409 "num_blocks": 16384, 00:04:02.409 "uuid": "ffbd4a76-3f34-5c6f-b3af-713ded190466", 00:04:02.409 "assigned_rate_limits": { 00:04:02.409 "rw_ios_per_sec": 0, 00:04:02.409 "rw_mbytes_per_sec": 0, 00:04:02.409 "r_mbytes_per_sec": 0, 00:04:02.409 "w_mbytes_per_sec": 0 00:04:02.409 }, 00:04:02.409 "claimed": false, 00:04:02.409 "zoned": false, 00:04:02.409 "supported_io_types": { 00:04:02.409 "read": true, 00:04:02.409 "write": true, 00:04:02.409 "unmap": true, 00:04:02.409 "flush": true, 00:04:02.409 "reset": true, 00:04:02.409 "nvme_admin": false, 00:04:02.409 "nvme_io": false, 00:04:02.409 "nvme_io_md": false, 00:04:02.409 "write_zeroes": true, 00:04:02.409 "zcopy": true, 00:04:02.409 "get_zone_info": false, 00:04:02.409 "zone_management": false, 00:04:02.409 "zone_append": false, 00:04:02.409 "compare": false, 00:04:02.409 "compare_and_write": false, 00:04:02.409 "abort": true, 00:04:02.409 "seek_hole": false, 00:04:02.409 "seek_data": false, 00:04:02.409 "copy": true, 00:04:02.409 "nvme_iov_md": false 00:04:02.409 }, 00:04:02.409 "memory_domains": [ 00:04:02.409 { 00:04:02.409 "dma_device_id": "system", 00:04:02.409 "dma_device_type": 1 00:04:02.409 }, 00:04:02.409 { 00:04:02.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.409 "dma_device_type": 2 
00:04:02.409 } 00:04:02.409 ], 00:04:02.409 "driver_specific": { 00:04:02.409 "passthru": { 00:04:02.409 "name": "Passthru0", 00:04:02.409 "base_bdev_name": "Malloc0" 00:04:02.409 } 00:04:02.409 } 00:04:02.409 } 00:04:02.409 ]' 00:04:02.409 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:02.409 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:02.409 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:02.410 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.410 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.410 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.410 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:02.410 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.410 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.410 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.410 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:02.410 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.410 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.410 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.410 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:02.410 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:02.410 20:14:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:02.410 ************************************ 00:04:02.410 END TEST rpc_integrity 00:04:02.410 ************************************ 00:04:02.410 00:04:02.410 real 0m0.263s 00:04:02.410 user 0m0.134s 00:04:02.410 sys 0m0.033s 00:04:02.410 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.410 20:14:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.672 20:14:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:02.672 20:14:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.672 20:14:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.672 20:14:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.672 ************************************ 00:04:02.672 START TEST rpc_plugins 00:04:02.672 ************************************ 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:02.672 20:14:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.672 20:14:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:02.672 20:14:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.672 20:14:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:02.672 { 00:04:02.672 "name": "Malloc1", 00:04:02.672 "aliases": 
[ 00:04:02.672 "6e2fdf4f-64fa-491f-acd0-668a041f8f09" 00:04:02.672 ], 00:04:02.672 "product_name": "Malloc disk", 00:04:02.672 "block_size": 4096, 00:04:02.672 "num_blocks": 256, 00:04:02.672 "uuid": "6e2fdf4f-64fa-491f-acd0-668a041f8f09", 00:04:02.672 "assigned_rate_limits": { 00:04:02.672 "rw_ios_per_sec": 0, 00:04:02.672 "rw_mbytes_per_sec": 0, 00:04:02.672 "r_mbytes_per_sec": 0, 00:04:02.672 "w_mbytes_per_sec": 0 00:04:02.672 }, 00:04:02.672 "claimed": false, 00:04:02.672 "zoned": false, 00:04:02.672 "supported_io_types": { 00:04:02.672 "read": true, 00:04:02.672 "write": true, 00:04:02.672 "unmap": true, 00:04:02.672 "flush": true, 00:04:02.672 "reset": true, 00:04:02.672 "nvme_admin": false, 00:04:02.672 "nvme_io": false, 00:04:02.672 "nvme_io_md": false, 00:04:02.672 "write_zeroes": true, 00:04:02.672 "zcopy": true, 00:04:02.672 "get_zone_info": false, 00:04:02.672 "zone_management": false, 00:04:02.672 "zone_append": false, 00:04:02.672 "compare": false, 00:04:02.672 "compare_and_write": false, 00:04:02.672 "abort": true, 00:04:02.672 "seek_hole": false, 00:04:02.672 "seek_data": false, 00:04:02.672 "copy": true, 00:04:02.672 "nvme_iov_md": false 00:04:02.672 }, 00:04:02.672 "memory_domains": [ 00:04:02.672 { 00:04:02.672 "dma_device_id": "system", 00:04:02.672 "dma_device_type": 1 00:04:02.672 }, 00:04:02.672 { 00:04:02.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.672 "dma_device_type": 2 00:04:02.672 } 00:04:02.672 ], 00:04:02.672 "driver_specific": {} 00:04:02.672 } 00:04:02.672 ]' 00:04:02.672 20:14:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:02.672 20:14:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:02.672 20:14:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.672 20:14:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.672 20:14:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:02.672 20:14:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:02.672 ************************************ 00:04:02.672 END TEST rpc_plugins 00:04:02.672 ************************************ 00:04:02.672 20:14:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:02.672 00:04:02.672 real 0m0.128s 00:04:02.672 user 0m0.073s 00:04:02.672 sys 0m0.012s 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.672 20:14:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:02.672 20:14:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:02.672 20:14:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.672 20:14:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.673 20:14:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.673 ************************************ 00:04:02.673 START TEST rpc_trace_cmd_test 00:04:02.673 ************************************ 00:04:02.673 20:14:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:02.673 20:14:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:02.673 20:14:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:02.673 20:14:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.673 20:14:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:02.934 20:14:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.934 20:14:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:02.934 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58958", 00:04:02.934 "tpoint_group_mask": "0x8", 00:04:02.934 "iscsi_conn": { 00:04:02.934 "mask": "0x2", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "scsi": { 00:04:02.934 "mask": "0x4", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "bdev": { 00:04:02.934 "mask": "0x8", 00:04:02.934 "tpoint_mask": "0xffffffffffffffff" 00:04:02.934 }, 00:04:02.934 "nvmf_rdma": { 00:04:02.934 "mask": "0x10", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "nvmf_tcp": { 00:04:02.934 "mask": "0x20", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "ftl": { 00:04:02.934 "mask": "0x40", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "blobfs": { 00:04:02.934 "mask": "0x80", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "dsa": { 00:04:02.934 "mask": "0x200", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "thread": { 00:04:02.934 "mask": "0x400", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "nvme_pcie": { 00:04:02.934 "mask": "0x800", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "iaa": { 00:04:02.934 "mask": "0x1000", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "nvme_tcp": { 00:04:02.934 "mask": "0x2000", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "bdev_nvme": { 00:04:02.934 "mask": "0x4000", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "sock": { 00:04:02.934 "mask": "0x8000", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "blob": { 00:04:02.934 "mask": "0x10000", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "bdev_raid": { 00:04:02.934 "mask": "0x20000", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 }, 00:04:02.934 "scheduler": { 00:04:02.934 "mask": "0x40000", 00:04:02.934 "tpoint_mask": "0x0" 00:04:02.934 } 00:04:02.934 }' 00:04:02.934 20:14:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:02.934 20:14:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:02.934 20:14:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:02.934 20:14:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:02.934 20:14:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:02.934 20:14:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:02.934 20:14:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:02.934 20:14:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:02.934 20:14:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:02.934 ************************************ 00:04:02.934 END TEST rpc_trace_cmd_test 00:04:02.934 ************************************ 00:04:02.934 20:14:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:02.934 00:04:02.934 real 0m0.185s 
00:04:02.934 user 0m0.149s 00:04:02.934 sys 0m0.025s 00:04:02.934 20:14:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.934 20:14:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:02.934 20:14:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:02.934 20:14:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:02.934 20:14:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:02.934 20:14:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.934 20:14:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.935 20:14:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.935 ************************************ 00:04:02.935 START TEST rpc_daemon_integrity 00:04:02.935 ************************************ 00:04:02.935 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:02.935 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:02.935 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.935 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.935 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.935 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:02.935 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:03.195 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:03.195 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:03.195 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.195 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.195 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.195 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:03.195 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:03.195 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.195 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.195 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.195 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:03.195 { 00:04:03.195 "name": "Malloc2", 00:04:03.195 "aliases": [ 00:04:03.195 "68ee1bdd-bbca-4f6d-8161-41e9e907269b" 00:04:03.195 ], 00:04:03.195 "product_name": "Malloc disk", 00:04:03.195 "block_size": 512, 00:04:03.195 "num_blocks": 16384, 00:04:03.195 "uuid": "68ee1bdd-bbca-4f6d-8161-41e9e907269b", 00:04:03.195 "assigned_rate_limits": { 00:04:03.195 "rw_ios_per_sec": 0, 00:04:03.195 "rw_mbytes_per_sec": 0, 00:04:03.195 "r_mbytes_per_sec": 0, 00:04:03.195 "w_mbytes_per_sec": 0 00:04:03.196 }, 00:04:03.196 "claimed": false, 00:04:03.196 "zoned": false, 00:04:03.196 "supported_io_types": { 00:04:03.196 "read": true, 00:04:03.196 "write": true, 00:04:03.196 "unmap": true, 00:04:03.196 "flush": true, 00:04:03.196 "reset": true, 00:04:03.196 "nvme_admin": false, 00:04:03.196 "nvme_io": false, 00:04:03.196 "nvme_io_md": false, 00:04:03.196 "write_zeroes": true, 00:04:03.196 "zcopy": true, 00:04:03.196 "get_zone_info": false, 00:04:03.196 "zone_management": false, 00:04:03.196 "zone_append": false, 00:04:03.196 "compare": false, 00:04:03.196 
"compare_and_write": false, 00:04:03.196 "abort": true, 00:04:03.196 "seek_hole": false, 00:04:03.196 "seek_data": false, 00:04:03.196 "copy": true, 00:04:03.196 "nvme_iov_md": false 00:04:03.196 }, 00:04:03.196 "memory_domains": [ 00:04:03.196 { 00:04:03.196 "dma_device_id": "system", 00:04:03.196 "dma_device_type": 1 00:04:03.196 }, 00:04:03.196 { 00:04:03.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.196 "dma_device_type": 2 00:04:03.196 } 00:04:03.196 ], 00:04:03.196 "driver_specific": {} 00:04:03.196 } 00:04:03.196 ]' 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.196 [2024-12-12 20:14:47.264019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:03.196 [2024-12-12 20:14:47.264099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:03.196 [2024-12-12 20:14:47.264123] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:03.196 [2024-12-12 20:14:47.264136] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:03.196 [2024-12-12 20:14:47.266754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:03.196 [2024-12-12 20:14:47.266815] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:03.196 Passthru0 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:03.196 { 00:04:03.196 "name": "Malloc2", 00:04:03.196 "aliases": [ 00:04:03.196 "68ee1bdd-bbca-4f6d-8161-41e9e907269b" 00:04:03.196 ], 00:04:03.196 "product_name": "Malloc disk", 00:04:03.196 "block_size": 512, 00:04:03.196 "num_blocks": 16384, 00:04:03.196 "uuid": "68ee1bdd-bbca-4f6d-8161-41e9e907269b", 00:04:03.196 "assigned_rate_limits": { 00:04:03.196 "rw_ios_per_sec": 0, 00:04:03.196 "rw_mbytes_per_sec": 0, 00:04:03.196 "r_mbytes_per_sec": 0, 00:04:03.196 "w_mbytes_per_sec": 0 00:04:03.196 }, 00:04:03.196 "claimed": true, 00:04:03.196 "claim_type": "exclusive_write", 00:04:03.196 "zoned": false, 00:04:03.196 "supported_io_types": { 00:04:03.196 "read": true, 00:04:03.196 "write": true, 00:04:03.196 "unmap": true, 00:04:03.196 "flush": true, 00:04:03.196 "reset": true, 00:04:03.196 "nvme_admin": false, 00:04:03.196 "nvme_io": false, 00:04:03.196 "nvme_io_md": false, 00:04:03.196 "write_zeroes": true, 00:04:03.196 "zcopy": true, 00:04:03.196 "get_zone_info": false, 00:04:03.196 "zone_management": false, 00:04:03.196 "zone_append": false, 00:04:03.196 "compare": false, 00:04:03.196 "compare_and_write": false, 00:04:03.196 "abort": true, 00:04:03.196 "seek_hole": false, 00:04:03.196 "seek_data": false, 
00:04:03.196 "copy": true, 00:04:03.196 "nvme_iov_md": false 00:04:03.196 }, 00:04:03.196 "memory_domains": [ 00:04:03.196 { 00:04:03.196 "dma_device_id": "system", 00:04:03.196 "dma_device_type": 1 00:04:03.196 }, 00:04:03.196 { 00:04:03.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.196 "dma_device_type": 2 00:04:03.196 } 00:04:03.196 ], 00:04:03.196 "driver_specific": {} 00:04:03.196 }, 00:04:03.196 { 00:04:03.196 "name": "Passthru0", 00:04:03.196 "aliases": [ 00:04:03.196 "7c9af791-338c-5cd1-86b5-4258ec50870f" 00:04:03.196 ], 00:04:03.196 "product_name": "passthru", 00:04:03.196 "block_size": 512, 00:04:03.196 "num_blocks": 16384, 00:04:03.196 "uuid": "7c9af791-338c-5cd1-86b5-4258ec50870f", 00:04:03.196 "assigned_rate_limits": { 00:04:03.196 "rw_ios_per_sec": 0, 00:04:03.196 "rw_mbytes_per_sec": 0, 00:04:03.196 "r_mbytes_per_sec": 0, 00:04:03.196 "w_mbytes_per_sec": 0 00:04:03.196 }, 00:04:03.196 "claimed": false, 00:04:03.196 "zoned": false, 00:04:03.196 "supported_io_types": { 00:04:03.196 "read": true, 00:04:03.196 "write": true, 00:04:03.196 "unmap": true, 00:04:03.196 "flush": true, 00:04:03.196 "reset": true, 00:04:03.196 "nvme_admin": false, 00:04:03.196 "nvme_io": false, 00:04:03.196 "nvme_io_md": false, 00:04:03.196 "write_zeroes": true, 00:04:03.196 "zcopy": true, 00:04:03.196 "get_zone_info": false, 00:04:03.196 "zone_management": false, 00:04:03.196 "zone_append": false, 00:04:03.196 "compare": false, 00:04:03.196 "compare_and_write": false, 00:04:03.196 "abort": true, 00:04:03.196 "seek_hole": false, 00:04:03.196 "seek_data": false, 00:04:03.196 "copy": true, 00:04:03.196 "nvme_iov_md": false 00:04:03.196 }, 00:04:03.196 "memory_domains": [ 00:04:03.196 { 00:04:03.196 "dma_device_id": "system", 00:04:03.196 "dma_device_type": 1 00:04:03.196 }, 00:04:03.196 { 00:04:03.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.196 "dma_device_type": 2 00:04:03.196 } 00:04:03.196 ], 00:04:03.196 "driver_specific": { 00:04:03.196 "passthru": { 00:04:03.196 "name": "Passthru0", 00:04:03.196 "base_bdev_name": "Malloc2" 00:04:03.196 } 00:04:03.196 } 00:04:03.196 } 00:04:03.196 ]' 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:03.196 ************************************ 00:04:03.196 END TEST rpc_daemon_integrity 00:04:03.196 ************************************ 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:03.196 00:04:03.196 real 0m0.255s 00:04:03.196 user 0m0.128s 00:04:03.196 sys 0m0.036s 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.196 20:14:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.457 20:14:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:03.457 20:14:47 rpc -- rpc/rpc.sh@84 -- # killprocess 58958 00:04:03.457 20:14:47 rpc -- common/autotest_common.sh@954 -- # '[' -z 58958 ']' 00:04:03.457 20:14:47 rpc -- common/autotest_common.sh@958 -- # kill -0 58958 00:04:03.457 20:14:47 rpc -- common/autotest_common.sh@959 -- # uname 00:04:03.457 20:14:47 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.457 20:14:47 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58958 00:04:03.457 killing process with pid 58958 00:04:03.457 20:14:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.457 20:14:47 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:03.457 20:14:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58958' 00:04:03.457 20:14:47 rpc -- common/autotest_common.sh@973 -- # kill 58958 00:04:03.457 20:14:47 rpc -- common/autotest_common.sh@978 -- # wait 58958 00:04:05.381 00:04:05.381 real 0m4.110s 00:04:05.381 user 0m4.422s 00:04:05.381 sys 0m0.751s 00:04:05.381 20:14:49 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.381 20:14:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.381 ************************************ 00:04:05.381 END TEST rpc 00:04:05.381 ************************************ 00:04:05.381 20:14:49 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:05.381 20:14:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.381 20:14:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.381 20:14:49 -- common/autotest_common.sh@10 -- # set +x 00:04:05.381 ************************************ 00:04:05.381 START TEST skip_rpc 00:04:05.381 ************************************ 00:04:05.381 20:14:49 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:05.381 * Looking for test storage... 
00:04:05.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:05.381 20:14:49 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:05.381 20:14:49 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:05.381 20:14:49 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:05.381 20:14:49 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.381 20:14:49 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:05.381 20:14:49 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.381 20:14:49 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:05.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.381 --rc genhtml_branch_coverage=1 00:04:05.381 --rc genhtml_function_coverage=1 00:04:05.381 --rc genhtml_legend=1 00:04:05.381 --rc geninfo_all_blocks=1 00:04:05.381 --rc geninfo_unexecuted_blocks=1 00:04:05.381 00:04:05.381 ' 00:04:05.381 20:14:49 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:05.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.381 --rc genhtml_branch_coverage=1 00:04:05.381 --rc genhtml_function_coverage=1 00:04:05.381 --rc genhtml_legend=1 00:04:05.381 --rc geninfo_all_blocks=1 00:04:05.381 --rc geninfo_unexecuted_blocks=1 00:04:05.381 00:04:05.381 ' 00:04:05.381 20:14:49 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:05.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.381 --rc genhtml_branch_coverage=1 00:04:05.381 --rc genhtml_function_coverage=1 00:04:05.381 --rc genhtml_legend=1 00:04:05.381 --rc geninfo_all_blocks=1 00:04:05.381 --rc geninfo_unexecuted_blocks=1 00:04:05.381 00:04:05.381 ' 00:04:05.381 20:14:49 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:05.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.381 --rc genhtml_branch_coverage=1 00:04:05.381 --rc genhtml_function_coverage=1 00:04:05.381 --rc genhtml_legend=1 00:04:05.381 --rc geninfo_all_blocks=1 00:04:05.381 --rc geninfo_unexecuted_blocks=1 00:04:05.381 00:04:05.381 ' 00:04:05.381 20:14:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:05.381 20:14:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:05.381 20:14:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:05.381 20:14:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.381 20:14:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.381 20:14:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.381 ************************************ 00:04:05.381 START TEST skip_rpc 00:04:05.381 ************************************ 00:04:05.381 20:14:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:05.381 20:14:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59176 00:04:05.381 20:14:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.381 20:14:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:05.381 20:14:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:05.381 [2024-12-12 20:14:49.480955] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
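The skip_rpc case starting here checks one thing: with --no-rpc-server the target must come up but refuse RPC traffic. A hedged sketch of the shape of the test, using only commands that appear in the trace (the NOT helper from autotest_common.sh simply inverts an exit status):

# Launch the target with no RPC listener, give it time to start,
# then assert that an RPC call fails instead of succeeding or hanging.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5                              # the test sleeps rather than polling /var/tmp/spdk.sock
NOT rpc_cmd spdk_get_version         # expected to fail: nothing is listening
killprocess "$spdk_pid"              # kill-and-wait helper from autotest_common.sh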
00:04:05.381 [2024-12-12 20:14:49.481112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59176 ] 00:04:05.642 [2024-12-12 20:14:49.647825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.642 [2024-12-12 20:14:49.785056] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59176 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59176 ']' 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59176 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59176 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.930 killing process with pid 59176 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59176' 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59176 00:04:10.930 20:14:54 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59176 00:04:11.501 00:04:11.501 real 0m6.207s 00:04:11.501 user 0m5.704s 00:04:11.501 sys 0m0.389s 00:04:11.501 20:14:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.501 ************************************ 00:04:11.501 END TEST skip_rpc 00:04:11.501 ************************************ 00:04:11.501 20:14:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:11.501 20:14:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:11.501 20:14:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.501 20:14:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.501 20:14:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.501 ************************************ 00:04:11.501 START TEST skip_rpc_with_json 00:04:11.501 ************************************ 00:04:11.501 20:14:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:11.501 20:14:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:11.501 20:14:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59269 00:04:11.501 20:14:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.501 20:14:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59269 00:04:11.501 20:14:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:11.501 20:14:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59269 ']' 00:04:11.501 20:14:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.501 20:14:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.501 20:14:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.501 20:14:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.501 20:14:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.762 [2024-12-12 20:14:55.738785] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
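skip_rpc_with_json, which starts here, is a configuration round-trip: build state over RPC, snapshot it with save_config (the JSON dump below), then restart the target from that file with no RPC server at all. Condensed, with the paths used later in this log; how the harness captures stdout into log.txt is an assumption:

# Phase 1: live target, create a TCP transport, save the config.
rpc_cmd nvmf_create_transport -t tcp
rpc_cmd save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
killprocess "$spdk_pid"

# Phase 2: a fresh target replays the JSON; the grep proves the
# transport was recreated without any RPC call being made.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
    --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json \
    > /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 2>&1 &   # output capture is assumed
sleep 5
grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt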
00:04:11.762 [2024-12-12 20:14:55.738926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59269 ] 00:04:11.762 [2024-12-12 20:14:55.900811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.022 [2024-12-12 20:14:56.024940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.594 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:12.594 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:12.594 20:14:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:12.594 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.594 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.594 [2024-12-12 20:14:56.707239] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:12.594 request: 00:04:12.594 { 00:04:12.594 "trtype": "tcp", 00:04:12.594 "method": "nvmf_get_transports", 00:04:12.594 "req_id": 1 00:04:12.594 } 00:04:12.594 Got JSON-RPC error response 00:04:12.594 response: 00:04:12.594 { 00:04:12.594 "code": -19, 00:04:12.594 "message": "No such device" 00:04:12.594 } 00:04:12.594 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:12.594 20:14:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:12.594 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.594 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.594 [2024-12-12 20:14:56.719375] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:12.594 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.594 20:14:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:12.594 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.594 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.855 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.855 20:14:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:12.855 { 00:04:12.855 "subsystems": [ 00:04:12.855 { 00:04:12.855 "subsystem": "fsdev", 00:04:12.855 "config": [ 00:04:12.855 { 00:04:12.855 "method": "fsdev_set_opts", 00:04:12.855 "params": { 00:04:12.855 "fsdev_io_pool_size": 65535, 00:04:12.855 "fsdev_io_cache_size": 256 00:04:12.855 } 00:04:12.855 } 00:04:12.855 ] 00:04:12.855 }, 00:04:12.855 { 00:04:12.855 "subsystem": "keyring", 00:04:12.855 "config": [] 00:04:12.855 }, 00:04:12.855 { 00:04:12.855 "subsystem": "iobuf", 00:04:12.855 "config": [ 00:04:12.855 { 00:04:12.855 "method": "iobuf_set_options", 00:04:12.855 "params": { 00:04:12.855 "small_pool_count": 8192, 00:04:12.855 "large_pool_count": 1024, 00:04:12.855 "small_bufsize": 8192, 00:04:12.855 "large_bufsize": 135168, 00:04:12.855 "enable_numa": false 00:04:12.855 } 00:04:12.855 } 00:04:12.855 ] 00:04:12.855 }, 00:04:12.855 { 00:04:12.855 "subsystem": "sock", 00:04:12.855 "config": [ 00:04:12.855 { 
00:04:12.855 "method": "sock_set_default_impl", 00:04:12.855 "params": { 00:04:12.855 "impl_name": "posix" 00:04:12.855 } 00:04:12.855 }, 00:04:12.855 { 00:04:12.855 "method": "sock_impl_set_options", 00:04:12.855 "params": { 00:04:12.855 "impl_name": "ssl", 00:04:12.855 "recv_buf_size": 4096, 00:04:12.855 "send_buf_size": 4096, 00:04:12.855 "enable_recv_pipe": true, 00:04:12.855 "enable_quickack": false, 00:04:12.855 "enable_placement_id": 0, 00:04:12.855 "enable_zerocopy_send_server": true, 00:04:12.855 "enable_zerocopy_send_client": false, 00:04:12.855 "zerocopy_threshold": 0, 00:04:12.855 "tls_version": 0, 00:04:12.855 "enable_ktls": false 00:04:12.855 } 00:04:12.855 }, 00:04:12.855 { 00:04:12.855 "method": "sock_impl_set_options", 00:04:12.855 "params": { 00:04:12.855 "impl_name": "posix", 00:04:12.855 "recv_buf_size": 2097152, 00:04:12.855 "send_buf_size": 2097152, 00:04:12.855 "enable_recv_pipe": true, 00:04:12.855 "enable_quickack": false, 00:04:12.855 "enable_placement_id": 0, 00:04:12.855 "enable_zerocopy_send_server": true, 00:04:12.855 "enable_zerocopy_send_client": false, 00:04:12.855 "zerocopy_threshold": 0, 00:04:12.855 "tls_version": 0, 00:04:12.855 "enable_ktls": false 00:04:12.855 } 00:04:12.855 } 00:04:12.855 ] 00:04:12.855 }, 00:04:12.855 { 00:04:12.855 "subsystem": "vmd", 00:04:12.855 "config": [] 00:04:12.855 }, 00:04:12.855 { 00:04:12.855 "subsystem": "accel", 00:04:12.855 "config": [ 00:04:12.855 { 00:04:12.855 "method": "accel_set_options", 00:04:12.855 "params": { 00:04:12.855 "small_cache_size": 128, 00:04:12.855 "large_cache_size": 16, 00:04:12.855 "task_count": 2048, 00:04:12.855 "sequence_count": 2048, 00:04:12.855 "buf_count": 2048 00:04:12.855 } 00:04:12.855 } 00:04:12.855 ] 00:04:12.855 }, 00:04:12.855 { 00:04:12.855 "subsystem": "bdev", 00:04:12.855 "config": [ 00:04:12.855 { 00:04:12.855 "method": "bdev_set_options", 00:04:12.855 "params": { 00:04:12.855 "bdev_io_pool_size": 65535, 00:04:12.855 "bdev_io_cache_size": 256, 00:04:12.855 "bdev_auto_examine": true, 00:04:12.855 "iobuf_small_cache_size": 128, 00:04:12.855 "iobuf_large_cache_size": 16 00:04:12.855 } 00:04:12.855 }, 00:04:12.855 { 00:04:12.855 "method": "bdev_raid_set_options", 00:04:12.855 "params": { 00:04:12.855 "process_window_size_kb": 1024, 00:04:12.855 "process_max_bandwidth_mb_sec": 0 00:04:12.855 } 00:04:12.855 }, 00:04:12.855 { 00:04:12.855 "method": "bdev_iscsi_set_options", 00:04:12.856 "params": { 00:04:12.856 "timeout_sec": 30 00:04:12.856 } 00:04:12.856 }, 00:04:12.856 { 00:04:12.856 "method": "bdev_nvme_set_options", 00:04:12.856 "params": { 00:04:12.856 "action_on_timeout": "none", 00:04:12.856 "timeout_us": 0, 00:04:12.856 "timeout_admin_us": 0, 00:04:12.856 "keep_alive_timeout_ms": 10000, 00:04:12.856 "arbitration_burst": 0, 00:04:12.856 "low_priority_weight": 0, 00:04:12.856 "medium_priority_weight": 0, 00:04:12.856 "high_priority_weight": 0, 00:04:12.856 "nvme_adminq_poll_period_us": 10000, 00:04:12.856 "nvme_ioq_poll_period_us": 0, 00:04:12.856 "io_queue_requests": 0, 00:04:12.856 "delay_cmd_submit": true, 00:04:12.856 "transport_retry_count": 4, 00:04:12.856 "bdev_retry_count": 3, 00:04:12.856 "transport_ack_timeout": 0, 00:04:12.856 "ctrlr_loss_timeout_sec": 0, 00:04:12.856 "reconnect_delay_sec": 0, 00:04:12.856 "fast_io_fail_timeout_sec": 0, 00:04:12.856 "disable_auto_failback": false, 00:04:12.856 "generate_uuids": false, 00:04:12.856 "transport_tos": 0, 00:04:12.856 "nvme_error_stat": false, 00:04:12.856 "rdma_srq_size": 0, 00:04:12.856 "io_path_stat": false, 
00:04:12.856 "allow_accel_sequence": false, 00:04:12.856 "rdma_max_cq_size": 0, 00:04:12.856 "rdma_cm_event_timeout_ms": 0, 00:04:12.856 "dhchap_digests": [ 00:04:12.856 "sha256", 00:04:12.856 "sha384", 00:04:12.856 "sha512" 00:04:12.856 ], 00:04:12.856 "dhchap_dhgroups": [ 00:04:12.856 "null", 00:04:12.856 "ffdhe2048", 00:04:12.856 "ffdhe3072", 00:04:12.856 "ffdhe4096", 00:04:12.856 "ffdhe6144", 00:04:12.856 "ffdhe8192" 00:04:12.856 ], 00:04:12.856 "rdma_umr_per_io": false 00:04:12.856 } 00:04:12.856 }, 00:04:12.856 { 00:04:12.856 "method": "bdev_nvme_set_hotplug", 00:04:12.856 "params": { 00:04:12.856 "period_us": 100000, 00:04:12.856 "enable": false 00:04:12.856 } 00:04:12.856 }, 00:04:12.856 { 00:04:12.856 "method": "bdev_wait_for_examine" 00:04:12.856 } 00:04:12.856 ] 00:04:12.856 }, 00:04:12.856 { 00:04:12.856 "subsystem": "scsi", 00:04:12.856 "config": null 00:04:12.856 }, 00:04:12.856 { 00:04:12.856 "subsystem": "scheduler", 00:04:12.856 "config": [ 00:04:12.856 { 00:04:12.856 "method": "framework_set_scheduler", 00:04:12.856 "params": { 00:04:12.856 "name": "static" 00:04:12.856 } 00:04:12.856 } 00:04:12.856 ] 00:04:12.856 }, 00:04:12.856 { 00:04:12.856 "subsystem": "vhost_scsi", 00:04:12.856 "config": [] 00:04:12.856 }, 00:04:12.856 { 00:04:12.856 "subsystem": "vhost_blk", 00:04:12.856 "config": [] 00:04:12.856 }, 00:04:12.856 { 00:04:12.856 "subsystem": "ublk", 00:04:12.856 "config": [] 00:04:12.856 }, 00:04:12.856 { 00:04:12.856 "subsystem": "nbd", 00:04:12.856 "config": [] 00:04:12.856 }, 00:04:12.856 { 00:04:12.856 "subsystem": "nvmf", 00:04:12.856 "config": [ 00:04:12.856 { 00:04:12.856 "method": "nvmf_set_config", 00:04:12.856 "params": { 00:04:12.856 "discovery_filter": "match_any", 00:04:12.856 "admin_cmd_passthru": { 00:04:12.856 "identify_ctrlr": false 00:04:12.856 }, 00:04:12.856 "dhchap_digests": [ 00:04:12.856 "sha256", 00:04:12.856 "sha384", 00:04:12.856 "sha512" 00:04:12.856 ], 00:04:12.856 "dhchap_dhgroups": [ 00:04:12.856 "null", 00:04:12.856 "ffdhe2048", 00:04:12.856 "ffdhe3072", 00:04:12.856 "ffdhe4096", 00:04:12.856 "ffdhe6144", 00:04:12.856 "ffdhe8192" 00:04:12.856 ] 00:04:12.856 } 00:04:12.856 }, 00:04:12.856 { 00:04:12.856 "method": "nvmf_set_max_subsystems", 00:04:12.856 "params": { 00:04:12.856 "max_subsystems": 1024 00:04:12.856 } 00:04:12.856 }, 00:04:12.856 { 00:04:12.856 "method": "nvmf_set_crdt", 00:04:12.856 "params": { 00:04:12.856 "crdt1": 0, 00:04:12.856 "crdt2": 0, 00:04:12.856 "crdt3": 0 00:04:12.856 } 00:04:12.856 }, 00:04:12.856 { 00:04:12.856 "method": "nvmf_create_transport", 00:04:12.856 "params": { 00:04:12.856 "trtype": "TCP", 00:04:12.856 "max_queue_depth": 128, 00:04:12.856 "max_io_qpairs_per_ctrlr": 127, 00:04:12.856 "in_capsule_data_size": 4096, 00:04:12.856 "max_io_size": 131072, 00:04:12.856 "io_unit_size": 131072, 00:04:12.856 "max_aq_depth": 128, 00:04:12.856 "num_shared_buffers": 511, 00:04:12.856 "buf_cache_size": 4294967295, 00:04:12.856 "dif_insert_or_strip": false, 00:04:12.856 "zcopy": false, 00:04:12.856 "c2h_success": true, 00:04:12.856 "sock_priority": 0, 00:04:12.856 "abort_timeout_sec": 1, 00:04:12.856 "ack_timeout": 0, 00:04:12.856 "data_wr_pool_size": 0 00:04:12.856 } 00:04:12.856 } 00:04:12.856 ] 00:04:12.856 }, 00:04:12.856 { 00:04:12.856 "subsystem": "iscsi", 00:04:12.856 "config": [ 00:04:12.856 { 00:04:12.856 "method": "iscsi_set_options", 00:04:12.856 "params": { 00:04:12.856 "node_base": "iqn.2016-06.io.spdk", 00:04:12.856 "max_sessions": 128, 00:04:12.856 "max_connections_per_session": 2, 00:04:12.856 
"max_queue_depth": 64, 00:04:12.856 "default_time2wait": 2, 00:04:12.856 "default_time2retain": 20, 00:04:12.856 "first_burst_length": 8192, 00:04:12.856 "immediate_data": true, 00:04:12.856 "allow_duplicated_isid": false, 00:04:12.856 "error_recovery_level": 0, 00:04:12.856 "nop_timeout": 60, 00:04:12.856 "nop_in_interval": 30, 00:04:12.856 "disable_chap": false, 00:04:12.856 "require_chap": false, 00:04:12.856 "mutual_chap": false, 00:04:12.856 "chap_group": 0, 00:04:12.856 "max_large_datain_per_connection": 64, 00:04:12.856 "max_r2t_per_connection": 4, 00:04:12.856 "pdu_pool_size": 36864, 00:04:12.856 "immediate_data_pool_size": 16384, 00:04:12.856 "data_out_pool_size": 2048 00:04:12.856 } 00:04:12.856 } 00:04:12.856 ] 00:04:12.856 } 00:04:12.856 ] 00:04:12.856 } 00:04:12.856 20:14:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:12.856 20:14:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59269 00:04:12.856 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59269 ']' 00:04:12.856 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59269 00:04:12.856 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:12.856 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.856 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59269 00:04:12.856 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.856 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.856 killing process with pid 59269 00:04:12.856 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59269' 00:04:12.856 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59269 00:04:12.856 20:14:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59269 00:04:14.772 20:14:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59314 00:04:14.772 20:14:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:14.772 20:14:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:20.063 20:15:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59314 00:04:20.063 20:15:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59314 ']' 00:04:20.063 20:15:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59314 00:04:20.063 20:15:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:20.063 20:15:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.063 20:15:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59314 00:04:20.063 20:15:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.063 20:15:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.063 killing process with pid 59314 00:04:20.063 20:15:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59314' 00:04:20.063 20:15:03 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@973 -- # kill 59314 00:04:20.063 20:15:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59314 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:20.633 00:04:20.633 real 0m9.143s 00:04:20.633 user 0m8.657s 00:04:20.633 sys 0m0.678s 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.633 ************************************ 00:04:20.633 END TEST skip_rpc_with_json 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.633 ************************************ 00:04:20.633 20:15:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:20.633 20:15:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.633 20:15:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.633 20:15:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.633 ************************************ 00:04:20.633 START TEST skip_rpc_with_delay 00:04:20.633 ************************************ 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:20.633 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.893 [2024-12-12 20:15:04.921540] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
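That *ERROR* from app.c is the skip_rpc_with_delay test passing, not failing: --wait-for-rpc makes no sense when --no-rpc-server disables the RPC server, so the target is required to reject the combination at startup. The whole test is effectively one inverted assertion:

# Must exit non-zero with "Cannot use '--wait-for-rpc' if no RPC
# server is going to be started." rather than booting and waiting forever.
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc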
00:04:20.893 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:20.893 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:20.893 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:20.893 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:20.893 00:04:20.893 real 0m0.120s 00:04:20.893 user 0m0.068s 00:04:20.893 sys 0m0.049s 00:04:20.893 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.893 ************************************ 00:04:20.893 END TEST skip_rpc_with_delay 00:04:20.893 20:15:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:20.893 ************************************ 00:04:20.893 20:15:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:20.893 20:15:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:20.893 20:15:04 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:20.893 20:15:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.893 20:15:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.893 20:15:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.893 ************************************ 00:04:20.893 START TEST exit_on_failed_rpc_init 00:04:20.893 ************************************ 00:04:20.893 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:20.893 20:15:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59431 00:04:20.893 20:15:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.893 20:15:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59431 00:04:20.893 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59431 ']' 00:04:20.893 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.893 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.893 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.894 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.894 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.894 [2024-12-12 20:15:05.079177] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:04:20.894 [2024-12-12 20:15:05.079297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59431 ] 00:04:21.168 [2024-12-12 20:15:05.232824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.168 [2024-12-12 20:15:05.315006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:21.740 20:15:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:22.001 [2024-12-12 20:15:05.996310] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:04:22.002 [2024-12-12 20:15:05.996438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59449 ] 00:04:22.002 [2024-12-12 20:15:06.152680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.263 [2024-12-12 20:15:06.245473] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.263 [2024-12-12 20:15:06.245541] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:22.263 [2024-12-12 20:15:06.245554] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:22.263 [2024-12-12 20:15:06.245567] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59431 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59431 ']' 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59431 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59431 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.263 killing process with pid 59431 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59431' 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59431 00:04:22.263 20:15:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59431 00:04:23.648 00:04:23.648 real 0m2.615s 00:04:23.648 user 0m2.925s 00:04:23.648 sys 0m0.402s 00:04:23.648 20:15:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.648 20:15:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:23.648 ************************************ 00:04:23.648 END TEST exit_on_failed_rpc_init 00:04:23.648 ************************************ 00:04:23.648 20:15:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:23.648 00:04:23.648 real 0m18.424s 00:04:23.648 user 0m17.506s 00:04:23.648 sys 0m1.687s 00:04:23.648 20:15:07 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.648 20:15:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.648 ************************************ 00:04:23.648 END TEST skip_rpc 00:04:23.648 ************************************ 00:04:23.648 20:15:07 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:23.648 20:15:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.648 20:15:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.648 20:15:07 -- common/autotest_common.sh@10 -- # set +x 00:04:23.648 
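For the exit_on_failed_rpc_init run that just ended, the "socket ... in use" errors were likewise the point: a second target aimed at the default RPC socket has to fail initialization and exit non-zero rather than limp along. A sketch of the collision, assuming the same helpers as above:

# First target owns /var/tmp/spdk.sock.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
spdk_pid=$!
waitforlisten "$spdk_pid"            # helper: block until the RPC socket accepts connections
# Second target on another core mask must die in rpc_listen
# ("RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.").
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
killprocess "$spdk_pid"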
************************************ 00:04:23.648 START TEST rpc_client 00:04:23.648 ************************************ 00:04:23.648 20:15:07 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:23.648 * Looking for test storage... 00:04:23.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:23.648 20:15:07 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:23.648 20:15:07 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:23.648 20:15:07 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:23.649 20:15:07 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.649 20:15:07 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:23.649 20:15:07 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.649 20:15:07 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:23.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.649 --rc genhtml_branch_coverage=1 00:04:23.649 --rc genhtml_function_coverage=1 00:04:23.649 --rc genhtml_legend=1 00:04:23.649 --rc geninfo_all_blocks=1 00:04:23.649 --rc geninfo_unexecuted_blocks=1 00:04:23.649 00:04:23.649 ' 00:04:23.649 20:15:07 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:23.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.649 --rc genhtml_branch_coverage=1 00:04:23.649 --rc genhtml_function_coverage=1 00:04:23.649 --rc genhtml_legend=1 00:04:23.649 --rc geninfo_all_blocks=1 00:04:23.649 --rc geninfo_unexecuted_blocks=1 00:04:23.649 00:04:23.649 ' 00:04:23.649 20:15:07 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:23.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.649 --rc genhtml_branch_coverage=1 00:04:23.649 --rc genhtml_function_coverage=1 00:04:23.649 --rc genhtml_legend=1 00:04:23.649 --rc geninfo_all_blocks=1 00:04:23.649 --rc geninfo_unexecuted_blocks=1 00:04:23.649 00:04:23.649 ' 00:04:23.649 20:15:07 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:23.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.649 --rc genhtml_branch_coverage=1 00:04:23.649 --rc genhtml_function_coverage=1 00:04:23.649 --rc genhtml_legend=1 00:04:23.649 --rc geninfo_all_blocks=1 00:04:23.649 --rc geninfo_unexecuted_blocks=1 00:04:23.649 00:04:23.649 ' 00:04:23.649 20:15:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:23.649 OK 00:04:23.649 20:15:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:23.649 00:04:23.649 real 0m0.178s 00:04:23.649 user 0m0.113s 00:04:23.649 sys 0m0.074s 00:04:23.649 20:15:07 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.649 20:15:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:23.649 ************************************ 00:04:23.649 END TEST rpc_client 00:04:23.649 ************************************ 00:04:23.911 20:15:07 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:23.911 20:15:07 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.911 20:15:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.911 20:15:07 -- common/autotest_common.sh@10 -- # set +x 00:04:23.911 ************************************ 00:04:23.911 START TEST json_config 00:04:23.911 ************************************ 00:04:23.911 20:15:07 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:23.911 20:15:07 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:23.911 20:15:07 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:23.911 20:15:07 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:23.911 20:15:08 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:23.911 20:15:08 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.911 20:15:08 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.911 20:15:08 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.911 20:15:08 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.911 20:15:08 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.911 20:15:08 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.911 20:15:08 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.911 20:15:08 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.911 20:15:08 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.911 20:15:08 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.911 20:15:08 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.911 20:15:08 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:23.911 20:15:08 json_config -- scripts/common.sh@345 -- # : 1 00:04:23.911 20:15:08 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.911 20:15:08 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.911 20:15:08 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:23.911 20:15:08 json_config -- scripts/common.sh@353 -- # local d=1 00:04:23.911 20:15:08 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.911 20:15:08 json_config -- scripts/common.sh@355 -- # echo 1 00:04:23.911 20:15:08 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.911 20:15:08 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:23.911 20:15:08 json_config -- scripts/common.sh@353 -- # local d=2 00:04:23.911 20:15:08 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.911 20:15:08 json_config -- scripts/common.sh@355 -- # echo 2 00:04:23.911 20:15:08 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.911 20:15:08 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.911 20:15:08 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.911 20:15:08 json_config -- scripts/common.sh@368 -- # return 0 00:04:23.911 20:15:08 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.911 20:15:08 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:23.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.911 --rc genhtml_branch_coverage=1 00:04:23.911 --rc genhtml_function_coverage=1 00:04:23.911 --rc genhtml_legend=1 00:04:23.911 --rc geninfo_all_blocks=1 00:04:23.911 --rc geninfo_unexecuted_blocks=1 00:04:23.911 00:04:23.911 ' 00:04:23.911 20:15:08 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:23.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.911 --rc genhtml_branch_coverage=1 00:04:23.911 --rc genhtml_function_coverage=1 00:04:23.911 --rc genhtml_legend=1 00:04:23.911 --rc geninfo_all_blocks=1 00:04:23.911 --rc geninfo_unexecuted_blocks=1 00:04:23.911 00:04:23.911 ' 00:04:23.911 20:15:08 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:23.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.911 --rc genhtml_branch_coverage=1 00:04:23.911 --rc genhtml_function_coverage=1 00:04:23.911 --rc genhtml_legend=1 00:04:23.911 --rc geninfo_all_blocks=1 00:04:23.911 --rc geninfo_unexecuted_blocks=1 00:04:23.911 00:04:23.911 ' 00:04:23.911 20:15:08 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:23.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.911 --rc genhtml_branch_coverage=1 00:04:23.911 --rc genhtml_function_coverage=1 00:04:23.911 --rc genhtml_legend=1 00:04:23.911 --rc geninfo_all_blocks=1 00:04:23.911 --rc geninfo_unexecuted_blocks=1 00:04:23.911 00:04:23.911 ' 00:04:23.911 20:15:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.911 20:15:08 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df74ab9f-c50a-47a3-a4fc-d710e0af4003 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=df74ab9f-c50a-47a3-a4fc-d710e0af4003 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:23.911 20:15:08 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:23.911 20:15:08 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.911 20:15:08 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.911 20:15:08 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.911 20:15:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.911 20:15:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.911 20:15:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.911 20:15:08 json_config -- paths/export.sh@5 -- # export PATH 00:04:23.911 20:15:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@51 -- # : 0 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:23.911 20:15:08 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:23.911 20:15:08 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:23.912 20:15:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.912 20:15:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.912 20:15:08 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:23.912 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:23.912 20:15:08 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:23.912 20:15:08 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:23.912 20:15:08 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:23.912 20:15:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:23.912 20:15:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:23.912 20:15:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:23.912 20:15:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:23.912 20:15:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:23.912 WARNING: No tests are enabled so not running JSON configuration tests 00:04:23.912 20:15:08 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:23.912 20:15:08 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:23.912 00:04:23.912 real 0m0.137s 00:04:23.912 user 0m0.088s 00:04:23.912 sys 0m0.054s 00:04:23.912 20:15:08 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.912 20:15:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.912 ************************************ 00:04:23.912 END TEST json_config 00:04:23.912 ************************************ 00:04:23.912 20:15:08 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:23.912 20:15:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.912 20:15:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.912 20:15:08 -- common/autotest_common.sh@10 -- # set +x 00:04:23.912 ************************************ 00:04:23.912 START TEST json_config_extra_key 00:04:23.912 ************************************ 00:04:23.912 20:15:08 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:23.912 20:15:08 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:23.912 20:15:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:23.912 20:15:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.173 20:15:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.173 20:15:08 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:24.173 20:15:08 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.173 20:15:08 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:24.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.173 --rc genhtml_branch_coverage=1 00:04:24.173 --rc genhtml_function_coverage=1 00:04:24.173 --rc genhtml_legend=1 00:04:24.173 --rc geninfo_all_blocks=1 00:04:24.173 --rc geninfo_unexecuted_blocks=1 00:04:24.173 00:04:24.173 ' 00:04:24.173 20:15:08 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:24.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.173 --rc genhtml_branch_coverage=1 00:04:24.173 --rc genhtml_function_coverage=1 00:04:24.173 --rc genhtml_legend=1 00:04:24.173 --rc geninfo_all_blocks=1 00:04:24.173 --rc geninfo_unexecuted_blocks=1 00:04:24.173 00:04:24.173 ' 00:04:24.173 20:15:08 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:24.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.173 --rc genhtml_branch_coverage=1 00:04:24.173 --rc genhtml_function_coverage=1 00:04:24.173 --rc genhtml_legend=1 00:04:24.173 --rc geninfo_all_blocks=1 00:04:24.173 --rc geninfo_unexecuted_blocks=1 00:04:24.173 00:04:24.173 ' 00:04:24.173 20:15:08 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:24.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.173 --rc genhtml_branch_coverage=1 00:04:24.173 --rc 
genhtml_function_coverage=1 00:04:24.173 --rc genhtml_legend=1 00:04:24.173 --rc geninfo_all_blocks=1 00:04:24.173 --rc geninfo_unexecuted_blocks=1 00:04:24.173 00:04:24.173 ' 00:04:24.173 20:15:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df74ab9f-c50a-47a3-a4fc-d710e0af4003 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=df74ab9f-c50a-47a3-a4fc-d710e0af4003 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.173 20:15:08 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.173 20:15:08 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.173 20:15:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.174 20:15:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.174 20:15:08 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.174 20:15:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:24.174 20:15:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.174 20:15:08 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:24.174 20:15:08 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:24.174 20:15:08 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:24.174 20:15:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.174 20:15:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.174 20:15:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.174 20:15:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:24.174 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:24.174 20:15:08 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:24.174 20:15:08 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:24.174 20:15:08 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:24.174 20:15:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:24.174 20:15:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:24.174 20:15:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:24.174 20:15:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:24.174 20:15:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:24.174 20:15:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:24.174 20:15:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:24.174 20:15:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:24.174 20:15:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:24.174 20:15:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.174 20:15:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:24.174 INFO: launching applications... 
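The "[: : integer expression expected" complaints interleaved above come from handing test's -eq operator an empty string (the traced '[' '' -eq 1 ']'). A minimal reproduction and the usual guard, with a hypothetical variable name:

  flag=''                                                   # empty, as in the trace
  [ "$flag" -eq 1 ] && echo enabled                         # errors: -eq needs integers on both sides
  [ "${flag:-0}" -eq 1 ] && echo enabled || echo disabled   # defaulting keeps the test numeric

The run continues anyway because the failed test merely returns nonzero; nothing in this path aborts on error.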
00:04:24.174 20:15:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:24.174 20:15:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:24.174 20:15:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:24.174 20:15:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.174 20:15:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.174 20:15:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.174 20:15:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.174 20:15:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.174 20:15:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59637 00:04:24.174 20:15:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.174 Waiting for target to run... 00:04:24.174 20:15:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59637 /var/tmp/spdk_tgt.sock 00:04:24.174 20:15:08 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59637 ']' 00:04:24.174 20:15:08 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.174 20:15:08 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.174 20:15:08 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.174 20:15:08 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.174 20:15:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:24.174 20:15:08 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:24.174 [2024-12-12 20:15:08.312472] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:04:24.174 [2024-12-12 20:15:08.312583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59637 ] 00:04:24.434 [2024-12-12 20:15:08.640675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.694 [2024-12-12 20:15:08.732593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.265 00:04:25.265 INFO: shutting down applications... 00:04:25.265 20:15:09 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.265 20:15:09 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:25.265 20:15:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:25.265 20:15:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
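The "Waiting for target to run..." step above is a poll: the harness launches spdk_tgt with its RPC socket at /var/tmp/spdk_tgt.sock and retries (max_retries=100 in the trace) until the target is reachable. A rough sketch of that wait, assuming the socket's appearance as the readiness signal (the real helper's probe may differ):

  app_pid=59637 rpc_sock=/var/tmp/spdk_tgt.sock
  for ((i = 0; i < 100; i++)); do
      kill -0 "$app_pid" 2>/dev/null || { echo 'target died before listening'; exit 1; }
      [[ -S $rpc_sock ]] && break   # socket is up, target is ready
      sleep 0.1
  done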
00:04:25.265 20:15:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:25.265 20:15:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:25.265 20:15:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:25.265 20:15:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59637 ]] 00:04:25.265 20:15:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59637 00:04:25.266 20:15:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:25.266 20:15:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.266 20:15:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59637 00:04:25.266 20:15:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:25.526 20:15:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:25.526 20:15:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.526 20:15:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59637 00:04:25.526 20:15:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:26.099 20:15:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:26.099 20:15:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.099 20:15:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59637 00:04:26.099 20:15:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:26.670 20:15:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:26.670 20:15:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.670 20:15:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59637 00:04:26.670 20:15:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:27.241 20:15:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:27.242 20:15:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.242 20:15:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59637 00:04:27.242 20:15:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:27.242 20:15:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:27.242 SPDK target shutdown done 00:04:27.242 Success 00:04:27.242 20:15:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:27.242 20:15:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:27.242 20:15:11 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:27.242 00:04:27.242 real 0m3.161s 00:04:27.242 user 0m2.757s 00:04:27.242 sys 0m0.419s 00:04:27.242 ************************************ 00:04:27.242 END TEST json_config_extra_key 00:04:27.242 ************************************ 00:04:27.242 20:15:11 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.242 20:15:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:27.242 20:15:11 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:27.242 20:15:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.242 20:15:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.242 20:15:11 -- common/autotest_common.sh@10 -- # set +x 00:04:27.242 
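The teardown traced just above is the complementary loop: one SIGINT, then up to 30 half-second polls of kill -0 (which only tests that the pid still exists) before the harness would give up. Reduced to its core:

  kill -SIGINT "$app_pid"                       # ask the target to exit cleanly
  for ((i = 0; i < 30; i++)); do
      kill -0 "$app_pid" 2>/dev/null || break   # pid gone: shutdown done
      sleep 0.5
  done

Here the target exited after a few half-second rounds, at which point common.sh prints "SPDK target shutdown done".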
************************************ 00:04:27.242 START TEST alias_rpc 00:04:27.242 ************************************ 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:27.242 * Looking for test storage... 00:04:27.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.242 20:15:11 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:27.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.242 --rc genhtml_branch_coverage=1 00:04:27.242 --rc genhtml_function_coverage=1 00:04:27.242 --rc genhtml_legend=1 00:04:27.242 --rc geninfo_all_blocks=1 00:04:27.242 --rc geninfo_unexecuted_blocks=1 00:04:27.242 00:04:27.242 ' 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:27.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.242 --rc genhtml_branch_coverage=1 00:04:27.242 --rc genhtml_function_coverage=1 00:04:27.242 --rc genhtml_legend=1 00:04:27.242 --rc geninfo_all_blocks=1 00:04:27.242 --rc geninfo_unexecuted_blocks=1 00:04:27.242 00:04:27.242 ' 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:27.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.242 --rc genhtml_branch_coverage=1 00:04:27.242 --rc genhtml_function_coverage=1 00:04:27.242 --rc genhtml_legend=1 00:04:27.242 --rc geninfo_all_blocks=1 00:04:27.242 --rc geninfo_unexecuted_blocks=1 00:04:27.242 00:04:27.242 ' 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:27.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.242 --rc genhtml_branch_coverage=1 00:04:27.242 --rc genhtml_function_coverage=1 00:04:27.242 --rc genhtml_legend=1 00:04:27.242 --rc geninfo_all_blocks=1 00:04:27.242 --rc geninfo_unexecuted_blocks=1 00:04:27.242 00:04:27.242 ' 00:04:27.242 20:15:11 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:27.242 20:15:11 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59735 00:04:27.242 20:15:11 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59735 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59735 ']' 00:04:27.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
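Each test sources the same lcov probe, so this cmp_versions walk repeats verbatim across the log. The traced lt 1.15 2 splits both strings on '.', '-' and ':' and compares numerically field by field; condensed:

  lt() {                                  # status 0 when $1 < $2
      local -a v1 v2; local i
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
          ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
      done
      return 1                            # equal
  }
  lt 1.15 2 && echo 'old lcov: enable branch/function coverage flags'

With lcov reporting 1.15 against a threshold of 2, the branch/function coverage options get appended, which is exactly the LCOV_OPTS export seen above.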
00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.242 20:15:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.242 20:15:11 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.503 [2024-12-12 20:15:11.547759] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:04:27.503 [2024-12-12 20:15:11.547914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59735 ] 00:04:27.503 [2024-12-12 20:15:11.708478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.764 [2024-12-12 20:15:11.832874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.337 20:15:12 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.337 20:15:12 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:28.337 20:15:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:28.598 20:15:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59735 00:04:28.598 20:15:12 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59735 ']' 00:04:28.598 20:15:12 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59735 00:04:28.598 20:15:12 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:28.598 20:15:12 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.598 20:15:12 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59735 00:04:28.598 20:15:12 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.598 20:15:12 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.598 killing process with pid 59735 00:04:28.598 20:15:12 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59735' 00:04:28.598 20:15:12 alias_rpc -- common/autotest_common.sh@973 -- # kill 59735 00:04:28.598 20:15:12 alias_rpc -- common/autotest_common.sh@978 -- # wait 59735 00:04:30.563 00:04:30.563 real 0m3.029s 00:04:30.563 user 0m3.107s 00:04:30.563 sys 0m0.459s 00:04:30.563 20:15:14 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.563 20:15:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.563 ************************************ 00:04:30.563 END TEST alias_rpc 00:04:30.563 ************************************ 00:04:30.563 20:15:14 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:30.563 20:15:14 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:30.563 20:15:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.563 20:15:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.563 20:15:14 -- common/autotest_common.sh@10 -- # set +x 00:04:30.563 ************************************ 00:04:30.563 START TEST spdkcli_tcp 00:04:30.563 ************************************ 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:30.563 * Looking for test storage... 
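killprocess, traced at the end of the alias_rpc run, signals defensively: it validates the pid argument, confirms the process is alive, and reads its command name (reactor_0 here) before killing and waiting. Approximately, with the harness's sudo special case reduced to a comment:

  killprocess() {
      local pid=$1 name
      [[ -n $pid ]] || return 1                  # the '[' -z ... ']' guard in the trace
      kill -0 "$pid" 2>/dev/null || return 1     # already gone?
      name=$(ps --no-headers -o comm= "$pid")    # reactor_0 in this run
      # the real helper branches when name == sudo; skipped in this sketch
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid" 2>/dev/null
  }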
00:04:30.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.563 20:15:14 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:30.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.563 --rc genhtml_branch_coverage=1 00:04:30.563 --rc genhtml_function_coverage=1 00:04:30.563 --rc genhtml_legend=1 00:04:30.563 --rc geninfo_all_blocks=1 00:04:30.563 --rc geninfo_unexecuted_blocks=1 00:04:30.563 00:04:30.563 ' 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:30.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.563 --rc genhtml_branch_coverage=1 00:04:30.563 --rc genhtml_function_coverage=1 00:04:30.563 --rc genhtml_legend=1 00:04:30.563 --rc geninfo_all_blocks=1 00:04:30.563 --rc geninfo_unexecuted_blocks=1 00:04:30.563 
00:04:30.563 ' 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:30.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.563 --rc genhtml_branch_coverage=1 00:04:30.563 --rc genhtml_function_coverage=1 00:04:30.563 --rc genhtml_legend=1 00:04:30.563 --rc geninfo_all_blocks=1 00:04:30.563 --rc geninfo_unexecuted_blocks=1 00:04:30.563 00:04:30.563 ' 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:30.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.563 --rc genhtml_branch_coverage=1 00:04:30.563 --rc genhtml_function_coverage=1 00:04:30.563 --rc genhtml_legend=1 00:04:30.563 --rc geninfo_all_blocks=1 00:04:30.563 --rc geninfo_unexecuted_blocks=1 00:04:30.563 00:04:30.563 ' 00:04:30.563 20:15:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:30.563 20:15:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:30.563 20:15:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:30.563 20:15:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:30.563 20:15:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:30.563 20:15:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:30.563 20:15:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.563 20:15:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59830 00:04:30.563 20:15:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59830 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59830 ']' 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.563 20:15:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.563 20:15:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.563 [2024-12-12 20:15:14.604183] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
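The core of the tcp test shows up in the next stretch of trace: socat publishes the target's UNIX-domain RPC socket on TCP port 9998, and rpc.py then issues rpc_get_methods across that bridge. In isolation the pairing looks like:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # bridge TCP 9998 -> RPC socket
  socat_pid=$!
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"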
00:04:30.563 [2024-12-12 20:15:14.604302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59830 ] 00:04:30.563 [2024-12-12 20:15:14.756501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.825 [2024-12-12 20:15:14.857350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.825 [2024-12-12 20:15:14.857360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.395 20:15:15 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.395 20:15:15 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:31.395 20:15:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59843 00:04:31.395 20:15:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:31.395 20:15:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:31.656 [ 00:04:31.656 "bdev_malloc_delete", 00:04:31.656 "bdev_malloc_create", 00:04:31.656 "bdev_null_resize", 00:04:31.656 "bdev_null_delete", 00:04:31.656 "bdev_null_create", 00:04:31.656 "bdev_nvme_cuse_unregister", 00:04:31.656 "bdev_nvme_cuse_register", 00:04:31.656 "bdev_opal_new_user", 00:04:31.656 "bdev_opal_set_lock_state", 00:04:31.656 "bdev_opal_delete", 00:04:31.656 "bdev_opal_get_info", 00:04:31.656 "bdev_opal_create", 00:04:31.656 "bdev_nvme_opal_revert", 00:04:31.656 "bdev_nvme_opal_init", 00:04:31.656 "bdev_nvme_send_cmd", 00:04:31.656 "bdev_nvme_set_keys", 00:04:31.656 "bdev_nvme_get_path_iostat", 00:04:31.656 "bdev_nvme_get_mdns_discovery_info", 00:04:31.656 "bdev_nvme_stop_mdns_discovery", 00:04:31.656 "bdev_nvme_start_mdns_discovery", 00:04:31.656 "bdev_nvme_set_multipath_policy", 00:04:31.656 "bdev_nvme_set_preferred_path", 00:04:31.656 "bdev_nvme_get_io_paths", 00:04:31.656 "bdev_nvme_remove_error_injection", 00:04:31.656 "bdev_nvme_add_error_injection", 00:04:31.656 "bdev_nvme_get_discovery_info", 00:04:31.656 "bdev_nvme_stop_discovery", 00:04:31.656 "bdev_nvme_start_discovery", 00:04:31.656 "bdev_nvme_get_controller_health_info", 00:04:31.656 "bdev_nvme_disable_controller", 00:04:31.656 "bdev_nvme_enable_controller", 00:04:31.656 "bdev_nvme_reset_controller", 00:04:31.656 "bdev_nvme_get_transport_statistics", 00:04:31.656 "bdev_nvme_apply_firmware", 00:04:31.656 "bdev_nvme_detach_controller", 00:04:31.656 "bdev_nvme_get_controllers", 00:04:31.656 "bdev_nvme_attach_controller", 00:04:31.656 "bdev_nvme_set_hotplug", 00:04:31.656 "bdev_nvme_set_options", 00:04:31.656 "bdev_passthru_delete", 00:04:31.656 "bdev_passthru_create", 00:04:31.656 "bdev_lvol_set_parent_bdev", 00:04:31.656 "bdev_lvol_set_parent", 00:04:31.656 "bdev_lvol_check_shallow_copy", 00:04:31.656 "bdev_lvol_start_shallow_copy", 00:04:31.656 "bdev_lvol_grow_lvstore", 00:04:31.656 "bdev_lvol_get_lvols", 00:04:31.656 "bdev_lvol_get_lvstores", 00:04:31.656 "bdev_lvol_delete", 00:04:31.656 "bdev_lvol_set_read_only", 00:04:31.656 "bdev_lvol_resize", 00:04:31.656 "bdev_lvol_decouple_parent", 00:04:31.656 "bdev_lvol_inflate", 00:04:31.656 "bdev_lvol_rename", 00:04:31.656 "bdev_lvol_clone_bdev", 00:04:31.656 "bdev_lvol_clone", 00:04:31.656 "bdev_lvol_snapshot", 00:04:31.656 "bdev_lvol_create", 00:04:31.656 "bdev_lvol_delete_lvstore", 00:04:31.656 "bdev_lvol_rename_lvstore", 00:04:31.656 
"bdev_lvol_create_lvstore", 00:04:31.656 "bdev_raid_set_options", 00:04:31.656 "bdev_raid_remove_base_bdev", 00:04:31.656 "bdev_raid_add_base_bdev", 00:04:31.656 "bdev_raid_delete", 00:04:31.656 "bdev_raid_create", 00:04:31.656 "bdev_raid_get_bdevs", 00:04:31.656 "bdev_error_inject_error", 00:04:31.656 "bdev_error_delete", 00:04:31.656 "bdev_error_create", 00:04:31.656 "bdev_split_delete", 00:04:31.656 "bdev_split_create", 00:04:31.656 "bdev_delay_delete", 00:04:31.656 "bdev_delay_create", 00:04:31.656 "bdev_delay_update_latency", 00:04:31.656 "bdev_zone_block_delete", 00:04:31.656 "bdev_zone_block_create", 00:04:31.656 "blobfs_create", 00:04:31.656 "blobfs_detect", 00:04:31.656 "blobfs_set_cache_size", 00:04:31.656 "bdev_xnvme_delete", 00:04:31.656 "bdev_xnvme_create", 00:04:31.656 "bdev_aio_delete", 00:04:31.656 "bdev_aio_rescan", 00:04:31.656 "bdev_aio_create", 00:04:31.656 "bdev_ftl_set_property", 00:04:31.656 "bdev_ftl_get_properties", 00:04:31.656 "bdev_ftl_get_stats", 00:04:31.656 "bdev_ftl_unmap", 00:04:31.656 "bdev_ftl_unload", 00:04:31.656 "bdev_ftl_delete", 00:04:31.656 "bdev_ftl_load", 00:04:31.657 "bdev_ftl_create", 00:04:31.657 "bdev_virtio_attach_controller", 00:04:31.657 "bdev_virtio_scsi_get_devices", 00:04:31.657 "bdev_virtio_detach_controller", 00:04:31.657 "bdev_virtio_blk_set_hotplug", 00:04:31.657 "bdev_iscsi_delete", 00:04:31.657 "bdev_iscsi_create", 00:04:31.657 "bdev_iscsi_set_options", 00:04:31.657 "accel_error_inject_error", 00:04:31.657 "ioat_scan_accel_module", 00:04:31.657 "dsa_scan_accel_module", 00:04:31.657 "iaa_scan_accel_module", 00:04:31.657 "keyring_file_remove_key", 00:04:31.657 "keyring_file_add_key", 00:04:31.657 "keyring_linux_set_options", 00:04:31.657 "fsdev_aio_delete", 00:04:31.657 "fsdev_aio_create", 00:04:31.657 "iscsi_get_histogram", 00:04:31.657 "iscsi_enable_histogram", 00:04:31.657 "iscsi_set_options", 00:04:31.657 "iscsi_get_auth_groups", 00:04:31.657 "iscsi_auth_group_remove_secret", 00:04:31.657 "iscsi_auth_group_add_secret", 00:04:31.657 "iscsi_delete_auth_group", 00:04:31.657 "iscsi_create_auth_group", 00:04:31.657 "iscsi_set_discovery_auth", 00:04:31.657 "iscsi_get_options", 00:04:31.657 "iscsi_target_node_request_logout", 00:04:31.657 "iscsi_target_node_set_redirect", 00:04:31.657 "iscsi_target_node_set_auth", 00:04:31.657 "iscsi_target_node_add_lun", 00:04:31.657 "iscsi_get_stats", 00:04:31.657 "iscsi_get_connections", 00:04:31.657 "iscsi_portal_group_set_auth", 00:04:31.657 "iscsi_start_portal_group", 00:04:31.657 "iscsi_delete_portal_group", 00:04:31.657 "iscsi_create_portal_group", 00:04:31.657 "iscsi_get_portal_groups", 00:04:31.657 "iscsi_delete_target_node", 00:04:31.657 "iscsi_target_node_remove_pg_ig_maps", 00:04:31.657 "iscsi_target_node_add_pg_ig_maps", 00:04:31.657 "iscsi_create_target_node", 00:04:31.657 "iscsi_get_target_nodes", 00:04:31.657 "iscsi_delete_initiator_group", 00:04:31.657 "iscsi_initiator_group_remove_initiators", 00:04:31.657 "iscsi_initiator_group_add_initiators", 00:04:31.657 "iscsi_create_initiator_group", 00:04:31.657 "iscsi_get_initiator_groups", 00:04:31.657 "nvmf_set_crdt", 00:04:31.657 "nvmf_set_config", 00:04:31.657 "nvmf_set_max_subsystems", 00:04:31.657 "nvmf_stop_mdns_prr", 00:04:31.657 "nvmf_publish_mdns_prr", 00:04:31.657 "nvmf_subsystem_get_listeners", 00:04:31.657 "nvmf_subsystem_get_qpairs", 00:04:31.657 "nvmf_subsystem_get_controllers", 00:04:31.657 "nvmf_get_stats", 00:04:31.657 "nvmf_get_transports", 00:04:31.657 "nvmf_create_transport", 00:04:31.657 "nvmf_get_targets", 00:04:31.657 
"nvmf_delete_target", 00:04:31.657 "nvmf_create_target", 00:04:31.657 "nvmf_subsystem_allow_any_host", 00:04:31.657 "nvmf_subsystem_set_keys", 00:04:31.657 "nvmf_subsystem_remove_host", 00:04:31.657 "nvmf_subsystem_add_host", 00:04:31.657 "nvmf_ns_remove_host", 00:04:31.657 "nvmf_ns_add_host", 00:04:31.657 "nvmf_subsystem_remove_ns", 00:04:31.657 "nvmf_subsystem_set_ns_ana_group", 00:04:31.657 "nvmf_subsystem_add_ns", 00:04:31.657 "nvmf_subsystem_listener_set_ana_state", 00:04:31.657 "nvmf_discovery_get_referrals", 00:04:31.657 "nvmf_discovery_remove_referral", 00:04:31.657 "nvmf_discovery_add_referral", 00:04:31.657 "nvmf_subsystem_remove_listener", 00:04:31.657 "nvmf_subsystem_add_listener", 00:04:31.657 "nvmf_delete_subsystem", 00:04:31.657 "nvmf_create_subsystem", 00:04:31.657 "nvmf_get_subsystems", 00:04:31.657 "env_dpdk_get_mem_stats", 00:04:31.657 "nbd_get_disks", 00:04:31.657 "nbd_stop_disk", 00:04:31.657 "nbd_start_disk", 00:04:31.657 "ublk_recover_disk", 00:04:31.657 "ublk_get_disks", 00:04:31.657 "ublk_stop_disk", 00:04:31.657 "ublk_start_disk", 00:04:31.657 "ublk_destroy_target", 00:04:31.657 "ublk_create_target", 00:04:31.657 "virtio_blk_create_transport", 00:04:31.657 "virtio_blk_get_transports", 00:04:31.657 "vhost_controller_set_coalescing", 00:04:31.657 "vhost_get_controllers", 00:04:31.657 "vhost_delete_controller", 00:04:31.657 "vhost_create_blk_controller", 00:04:31.657 "vhost_scsi_controller_remove_target", 00:04:31.657 "vhost_scsi_controller_add_target", 00:04:31.657 "vhost_start_scsi_controller", 00:04:31.657 "vhost_create_scsi_controller", 00:04:31.657 "thread_set_cpumask", 00:04:31.657 "scheduler_set_options", 00:04:31.657 "framework_get_governor", 00:04:31.657 "framework_get_scheduler", 00:04:31.657 "framework_set_scheduler", 00:04:31.657 "framework_get_reactors", 00:04:31.657 "thread_get_io_channels", 00:04:31.657 "thread_get_pollers", 00:04:31.657 "thread_get_stats", 00:04:31.657 "framework_monitor_context_switch", 00:04:31.657 "spdk_kill_instance", 00:04:31.657 "log_enable_timestamps", 00:04:31.657 "log_get_flags", 00:04:31.657 "log_clear_flag", 00:04:31.657 "log_set_flag", 00:04:31.657 "log_get_level", 00:04:31.657 "log_set_level", 00:04:31.657 "log_get_print_level", 00:04:31.657 "log_set_print_level", 00:04:31.657 "framework_enable_cpumask_locks", 00:04:31.657 "framework_disable_cpumask_locks", 00:04:31.657 "framework_wait_init", 00:04:31.657 "framework_start_init", 00:04:31.657 "scsi_get_devices", 00:04:31.657 "bdev_get_histogram", 00:04:31.657 "bdev_enable_histogram", 00:04:31.657 "bdev_set_qos_limit", 00:04:31.657 "bdev_set_qd_sampling_period", 00:04:31.657 "bdev_get_bdevs", 00:04:31.657 "bdev_reset_iostat", 00:04:31.657 "bdev_get_iostat", 00:04:31.657 "bdev_examine", 00:04:31.657 "bdev_wait_for_examine", 00:04:31.657 "bdev_set_options", 00:04:31.657 "accel_get_stats", 00:04:31.657 "accel_set_options", 00:04:31.657 "accel_set_driver", 00:04:31.657 "accel_crypto_key_destroy", 00:04:31.657 "accel_crypto_keys_get", 00:04:31.657 "accel_crypto_key_create", 00:04:31.657 "accel_assign_opc", 00:04:31.657 "accel_get_module_info", 00:04:31.657 "accel_get_opc_assignments", 00:04:31.657 "vmd_rescan", 00:04:31.657 "vmd_remove_device", 00:04:31.657 "vmd_enable", 00:04:31.657 "sock_get_default_impl", 00:04:31.657 "sock_set_default_impl", 00:04:31.657 "sock_impl_set_options", 00:04:31.657 "sock_impl_get_options", 00:04:31.657 "iobuf_get_stats", 00:04:31.657 "iobuf_set_options", 00:04:31.657 "keyring_get_keys", 00:04:31.657 "framework_get_pci_devices", 00:04:31.657 
"framework_get_config", 00:04:31.657 "framework_get_subsystems", 00:04:31.657 "fsdev_set_opts", 00:04:31.657 "fsdev_get_opts", 00:04:31.657 "trace_get_info", 00:04:31.657 "trace_get_tpoint_group_mask", 00:04:31.657 "trace_disable_tpoint_group", 00:04:31.657 "trace_enable_tpoint_group", 00:04:31.657 "trace_clear_tpoint_mask", 00:04:31.657 "trace_set_tpoint_mask", 00:04:31.657 "notify_get_notifications", 00:04:31.657 "notify_get_types", 00:04:31.657 "spdk_get_version", 00:04:31.657 "rpc_get_methods" 00:04:31.657 ] 00:04:31.657 20:15:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:31.657 20:15:15 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.657 20:15:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.657 20:15:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:31.657 20:15:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59830 00:04:31.657 20:15:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59830 ']' 00:04:31.657 20:15:15 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59830 00:04:31.657 20:15:15 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:31.657 20:15:15 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.657 20:15:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59830 00:04:31.657 20:15:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.657 20:15:15 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.657 20:15:15 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59830' 00:04:31.657 killing process with pid 59830 00:04:31.657 20:15:15 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59830 00:04:31.657 20:15:15 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59830 00:04:33.573 00:04:33.573 real 0m2.915s 00:04:33.573 user 0m5.239s 00:04:33.573 sys 0m0.430s 00:04:33.573 20:15:17 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.573 ************************************ 00:04:33.573 END TEST spdkcli_tcp 00:04:33.573 20:15:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.573 ************************************ 00:04:33.573 20:15:17 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:33.573 20:15:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.573 20:15:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.573 20:15:17 -- common/autotest_common.sh@10 -- # set +x 00:04:33.573 ************************************ 00:04:33.573 START TEST dpdk_mem_utility 00:04:33.573 ************************************ 00:04:33.573 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:33.573 * Looking for test storage... 
00:04:33.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:33.573 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:33.573 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:33.573 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:33.573 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.573 20:15:17 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:33.573 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.573 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:33.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.573 --rc genhtml_branch_coverage=1 00:04:33.573 --rc genhtml_function_coverage=1 00:04:33.573 --rc genhtml_legend=1 00:04:33.573 --rc geninfo_all_blocks=1 00:04:33.573 --rc geninfo_unexecuted_blocks=1 00:04:33.573 00:04:33.573 ' 00:04:33.574 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:33.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.574 --rc 
genhtml_branch_coverage=1 00:04:33.574 --rc genhtml_function_coverage=1 00:04:33.574 --rc genhtml_legend=1 00:04:33.574 --rc geninfo_all_blocks=1 00:04:33.574 --rc geninfo_unexecuted_blocks=1 00:04:33.574 00:04:33.574 ' 00:04:33.574 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:33.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.574 --rc genhtml_branch_coverage=1 00:04:33.574 --rc genhtml_function_coverage=1 00:04:33.574 --rc genhtml_legend=1 00:04:33.574 --rc geninfo_all_blocks=1 00:04:33.574 --rc geninfo_unexecuted_blocks=1 00:04:33.574 00:04:33.574 ' 00:04:33.574 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:33.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.574 --rc genhtml_branch_coverage=1 00:04:33.574 --rc genhtml_function_coverage=1 00:04:33.574 --rc genhtml_legend=1 00:04:33.574 --rc geninfo_all_blocks=1 00:04:33.574 --rc geninfo_unexecuted_blocks=1 00:04:33.574 00:04:33.574 ' 00:04:33.574 20:15:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:33.574 20:15:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59937 00:04:33.574 20:15:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59937 00:04:33.574 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59937 ']' 00:04:33.574 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.574 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.574 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.574 20:15:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.574 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.574 20:15:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:33.574 [2024-12-12 20:15:17.586201] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
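The dpdk_mem_utility test starting here is a two-step flow: the env_dpdk_get_mem_stats RPC (present in the method list above) makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that file. Driven by hand it is roughly:

  scripts/rpc.py env_dpdk_get_mem_stats   # target reports {"filename": "/tmp/spdk_mem_dump.txt"}
  scripts/dpdk_mem_info.py                # heaps, mempools, memzones summary
  scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0, as dumped below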
00:04:33.574 [2024-12-12 20:15:17.586333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59937 ] 00:04:33.574 [2024-12-12 20:15:17.745624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.834 [2024-12-12 20:15:17.903694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.779 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.779 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:34.779 20:15:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:34.779 20:15:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:34.779 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.779 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:34.779 { 00:04:34.779 "filename": "/tmp/spdk_mem_dump.txt" 00:04:34.779 } 00:04:34.779 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.779 20:15:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:34.779 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:34.779 1 heaps totaling size 824.000000 MiB 00:04:34.779 size: 824.000000 MiB heap id: 0 00:04:34.779 end heaps---------- 00:04:34.779 9 mempools totaling size 603.782043 MiB 00:04:34.779 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:34.779 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:34.779 size: 100.555481 MiB name: bdev_io_59937 00:04:34.779 size: 50.003479 MiB name: msgpool_59937 00:04:34.779 size: 36.509338 MiB name: fsdev_io_59937 00:04:34.779 size: 21.763794 MiB name: PDU_Pool 00:04:34.779 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:34.779 size: 4.133484 MiB name: evtpool_59937 00:04:34.779 size: 0.026123 MiB name: Session_Pool 00:04:34.779 end mempools------- 00:04:34.779 6 memzones totaling size 4.142822 MiB 00:04:34.779 size: 1.000366 MiB name: RG_ring_0_59937 00:04:34.779 size: 1.000366 MiB name: RG_ring_1_59937 00:04:34.779 size: 1.000366 MiB name: RG_ring_4_59937 00:04:34.779 size: 1.000366 MiB name: RG_ring_5_59937 00:04:34.779 size: 0.125366 MiB name: RG_ring_2_59937 00:04:34.779 size: 0.015991 MiB name: RG_ring_3_59937 00:04:34.779 end memzones------- 00:04:34.779 20:15:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:34.779 heap id: 0 total size: 824.000000 MiB number of busy elements: 327 number of free elements: 18 00:04:34.779 list of free elements. 
size: 16.778442 MiB 00:04:34.779 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:34.779 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:34.779 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:34.779 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:34.779 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:34.779 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:34.779 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:34.779 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:34.779 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:34.779 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:34.779 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:34.779 element at address: 0x20001b400000 with size: 0.559753 MiB 00:04:34.779 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:34.779 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:34.780 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:34.780 element at address: 0x200012c00000 with size: 0.433472 MiB 00:04:34.780 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:34.780 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:34.780 list of standard malloc elements. size: 199.290649 MiB 00:04:34.780 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:34.780 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:34.780 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:34.780 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:34.780 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:34.780 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:34.780 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:34.780 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:34.780 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:34.780 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:34.780 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:34.780 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:34.780 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:34.780 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:04:34.780 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:04:34.780 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4911c0 with size: 0.000244 MiB 
00:04:34.781 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:34.781 element at 
address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:34.781 element at address: 0x200028863f40 with size: 0.000244 MiB 00:04:34.781 element at address: 0x200028864040 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886af80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886b080 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886b180 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886b280 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886cc80 
with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:34.781 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:34.782 element at address: 0x20002886fd80 with size: 0.000244 MiB 
00:04:34.782 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:34.782 list of memzone associated elements. size: 607.930908 MiB 00:04:34.782 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:34.782 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:34.782 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:34.782 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:34.782 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:34.782 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59937_0 00:04:34.782 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:34.782 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59937_0 00:04:34.782 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:34.782 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59937_0 00:04:34.782 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:34.782 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:34.782 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:34.782 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:34.782 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:34.782 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59937_0 00:04:34.782 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:34.782 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59937 00:04:34.782 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:34.782 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59937 00:04:34.782 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:34.782 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:34.782 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:34.782 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:34.782 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:34.782 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:34.782 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:34.782 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:34.782 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:34.782 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59937 00:04:34.782 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:34.782 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59937 00:04:34.782 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:34.782 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59937 00:04:34.782 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:34.782 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59937 00:04:34.782 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:34.782 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59937 00:04:34.782 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:34.782 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59937 00:04:34.782 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:34.782 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:34.782 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:34.782 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 
00:04:34.782 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:34.782 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:34.782 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:34.782 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59937 00:04:34.782 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:34.782 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59937 00:04:34.782 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:34.782 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:34.782 element at address: 0x200028864140 with size: 0.023804 MiB 00:04:34.782 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:34.782 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:34.782 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59937 00:04:34.782 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:04:34.782 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:34.782 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:34.782 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59937 00:04:34.782 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:34.782 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59937 00:04:34.782 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:34.782 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59937 00:04:34.782 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:04:34.782 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:34.782 20:15:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:34.782 20:15:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59937 00:04:34.782 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59937 ']' 00:04:34.782 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59937 00:04:34.782 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:34.782 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.782 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59937 00:04:34.782 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.782 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.782 killing process with pid 59937 00:04:34.782 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59937' 00:04:34.782 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59937 00:04:34.782 20:15:18 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59937 00:04:36.168 00:04:36.168 real 0m2.910s 00:04:36.168 user 0m2.717s 00:04:36.168 sys 0m0.614s 00:04:36.168 ************************************ 00:04:36.168 END TEST dpdk_mem_utility 00:04:36.168 ************************************ 00:04:36.168 20:15:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.168 20:15:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:36.168 20:15:20 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:36.168 20:15:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.168 
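
For reference, the dpdk_mem_utility test that just finished drives exactly two pieces: the env_dpdk_get_mem_stats RPC, whose reply above names /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which turns that dump into the heap/mempool/memzone summary and, with -m 0, the per-element listing printed above. A minimal by-hand sketch of the same flow; the spdk_tgt path and the sleep are assumptions (the harness builds the path itself and polls for the socket), everything else is taken from the trace:

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt &                    # app whose DPDK env we inspect (pid 59937 in the run above)
    sleep 1                                   # crude wait for the RPC socket to come up
    ./scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # heap / mempool / memzone totals
    ./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0
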
20:15:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.168 20:15:20 -- common/autotest_common.sh@10 -- # set +x 00:04:36.168 ************************************ 00:04:36.168 START TEST event 00:04:36.168 ************************************ 00:04:36.168 20:15:20 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:36.168 * Looking for test storage... 00:04:36.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:36.168 20:15:20 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.168 20:15:20 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.168 20:15:20 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.429 20:15:20 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.429 20:15:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.429 20:15:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.429 20:15:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.429 20:15:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.429 20:15:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.429 20:15:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.429 20:15:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.429 20:15:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.429 20:15:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.429 20:15:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.429 20:15:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.429 20:15:20 event -- scripts/common.sh@344 -- # case "$op" in 00:04:36.429 20:15:20 event -- scripts/common.sh@345 -- # : 1 00:04:36.429 20:15:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.429 20:15:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.429 20:15:20 event -- scripts/common.sh@365 -- # decimal 1 00:04:36.429 20:15:20 event -- scripts/common.sh@353 -- # local d=1 00:04:36.429 20:15:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.429 20:15:20 event -- scripts/common.sh@355 -- # echo 1 00:04:36.429 20:15:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.429 20:15:20 event -- scripts/common.sh@366 -- # decimal 2 00:04:36.429 20:15:20 event -- scripts/common.sh@353 -- # local d=2 00:04:36.429 20:15:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.429 20:15:20 event -- scripts/common.sh@355 -- # echo 2 00:04:36.429 20:15:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.429 20:15:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.429 20:15:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.429 20:15:20 event -- scripts/common.sh@368 -- # return 0 00:04:36.429 20:15:20 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.429 20:15:20 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.429 --rc genhtml_branch_coverage=1 00:04:36.429 --rc genhtml_function_coverage=1 00:04:36.429 --rc genhtml_legend=1 00:04:36.429 --rc geninfo_all_blocks=1 00:04:36.429 --rc geninfo_unexecuted_blocks=1 00:04:36.429 00:04:36.429 ' 00:04:36.429 20:15:20 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.429 --rc genhtml_branch_coverage=1 00:04:36.429 --rc genhtml_function_coverage=1 00:04:36.429 --rc genhtml_legend=1 00:04:36.429 --rc geninfo_all_blocks=1 00:04:36.429 --rc geninfo_unexecuted_blocks=1 00:04:36.429 00:04:36.429 ' 00:04:36.429 20:15:20 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.429 --rc genhtml_branch_coverage=1 00:04:36.429 --rc genhtml_function_coverage=1 00:04:36.429 --rc genhtml_legend=1 00:04:36.429 --rc geninfo_all_blocks=1 00:04:36.429 --rc geninfo_unexecuted_blocks=1 00:04:36.429 00:04:36.429 ' 00:04:36.429 20:15:20 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.429 --rc genhtml_branch_coverage=1 00:04:36.429 --rc genhtml_function_coverage=1 00:04:36.429 --rc genhtml_legend=1 00:04:36.429 --rc geninfo_all_blocks=1 00:04:36.429 --rc geninfo_unexecuted_blocks=1 00:04:36.429 00:04:36.429 ' 00:04:36.429 20:15:20 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:36.429 20:15:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:36.429 20:15:20 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:36.429 20:15:20 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:36.429 20:15:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.429 20:15:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.429 ************************************ 00:04:36.429 START TEST event_perf 00:04:36.429 ************************************ 00:04:36.429 20:15:20 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:36.429 Running I/O for 1 seconds...[2024-12-12 
20:15:20.480574] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:04:36.429 [2024-12-12 20:15:20.480683] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60034 ] 00:04:36.429 [2024-12-12 20:15:20.641374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:36.690 [2024-12-12 20:15:20.760507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.690 [2024-12-12 20:15:20.760776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:36.690 Running I/O for 1 seconds...[2024-12-12 20:15:20.761159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:36.690 [2024-12-12 20:15:20.761356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.078 00:04:38.078 lcore 0: 143364 00:04:38.078 lcore 1: 143367 00:04:38.078 lcore 2: 143367 00:04:38.078 lcore 3: 143367 00:04:38.078 done. 00:04:38.078 00:04:38.078 real 0m1.439s 00:04:38.078 user 0m4.237s 00:04:38.078 sys 0m0.081s 00:04:38.078 20:15:21 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.078 20:15:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:38.078 ************************************ 00:04:38.078 END TEST event_perf 00:04:38.078 ************************************ 00:04:38.078 20:15:21 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:38.078 20:15:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:38.078 20:15:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.078 20:15:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.078 ************************************ 00:04:38.078 START TEST event_reactor 00:04:38.078 ************************************ 00:04:38.078 20:15:21 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:38.078 [2024-12-12 20:15:21.959913] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
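
The cmp_versions xtrace near the top of this event suite (the lt 1.15 2 call over lcov --version) is scripts/common.sh deciding whether the pre-2.x lcov flags in LCOV_OPTS are still needed: both version strings are split on the characters . - : and compared field by field. A standalone sketch of that comparison, with missing fields treated as zero; the helper name version_lt is illustrative, not the harness function:

    version_lt() {                            # succeeds when $1 sorts strictly before $2
        local IFS=.-: a b i n
        read -ra a <<< "$1"; read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                              # equal versions are not less-than
    }
    version_lt "1.15" "2" && echo "old lcov: keep the --rc lcov_*_coverage=1 options"
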
00:04:38.078 [2024-12-12 20:15:21.960023] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60073 ] 00:04:38.078 [2024-12-12 20:15:22.119986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.078 [2024-12-12 20:15:22.249091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.461 test_start 00:04:39.461 oneshot 00:04:39.461 tick 100 00:04:39.461 tick 100 00:04:39.461 tick 250 00:04:39.461 tick 100 00:04:39.461 tick 100 00:04:39.461 tick 100 00:04:39.461 tick 250 00:04:39.461 tick 500 00:04:39.461 tick 100 00:04:39.461 tick 100 00:04:39.461 tick 250 00:04:39.461 tick 100 00:04:39.461 tick 100 00:04:39.461 test_end 00:04:39.461 00:04:39.461 real 0m1.471s 00:04:39.461 user 0m1.293s 00:04:39.461 sys 0m0.069s 00:04:39.461 20:15:23 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.461 20:15:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:39.461 ************************************ 00:04:39.461 END TEST event_reactor 00:04:39.461 ************************************ 00:04:39.461 20:15:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:39.461 20:15:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:39.461 20:15:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.461 20:15:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.461 ************************************ 00:04:39.461 START TEST event_reactor_perf 00:04:39.461 ************************************ 00:04:39.461 20:15:23 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:39.461 [2024-12-12 20:15:23.471521] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
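
In the event_reactor output above, the lines between test_start and test_end appear to be pollers firing on the single reactor: oneshot reports once, and the recurring tick 100 / tick 250 / tick 500 lines line up with pollers registered at those periods, so a longer run should simply repeat the pattern proportionally. Only the -t (runtime in seconds) flag is exercised by the harness; no other options are assumed here:

    /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 5            # roughly 5x the tick lines above
    /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 5  # longer-averaged events/sec figure
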
00:04:39.461 [2024-12-12 20:15:23.471630] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60110 ] 00:04:39.461 [2024-12-12 20:15:23.632673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.722 [2024-12-12 20:15:23.730901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.664 test_start 00:04:40.664 test_end 00:04:40.664 Performance: 314777 events per second 00:04:40.664 00:04:40.664 real 0m1.442s 00:04:40.664 user 0m1.272s 00:04:40.664 sys 0m0.061s 00:04:40.664 20:15:24 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.664 20:15:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:40.664 ************************************ 00:04:40.664 END TEST event_reactor_perf 00:04:40.664 ************************************ 00:04:40.925 20:15:24 event -- event/event.sh@49 -- # uname -s 00:04:40.925 20:15:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:40.925 20:15:24 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:40.925 20:15:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.925 20:15:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.925 20:15:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.925 ************************************ 00:04:40.925 START TEST event_scheduler 00:04:40.925 ************************************ 00:04:40.925 20:15:24 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:40.925 * Looking for test storage... 
00:04:40.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:40.925 20:15:24 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.925 20:15:24 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.925 20:15:24 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.925 20:15:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.925 20:15:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.925 20:15:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.925 20:15:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.925 20:15:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.925 20:15:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.925 20:15:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.925 20:15:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.925 20:15:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.926 20:15:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:40.926 20:15:25 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.926 20:15:25 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.926 --rc genhtml_branch_coverage=1 00:04:40.926 --rc genhtml_function_coverage=1 00:04:40.926 --rc genhtml_legend=1 00:04:40.926 --rc geninfo_all_blocks=1 00:04:40.926 --rc geninfo_unexecuted_blocks=1 00:04:40.926 00:04:40.926 ' 00:04:40.926 20:15:25 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.926 --rc genhtml_branch_coverage=1 00:04:40.926 --rc genhtml_function_coverage=1 00:04:40.926 --rc genhtml_legend=1 00:04:40.926 --rc geninfo_all_blocks=1 00:04:40.926 --rc geninfo_unexecuted_blocks=1 00:04:40.926 00:04:40.926 ' 00:04:40.926 20:15:25 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:40.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.926 --rc genhtml_branch_coverage=1 00:04:40.926 --rc genhtml_function_coverage=1 00:04:40.926 --rc genhtml_legend=1 00:04:40.926 --rc geninfo_all_blocks=1 00:04:40.926 --rc geninfo_unexecuted_blocks=1 00:04:40.926 00:04:40.926 ' 00:04:40.926 20:15:25 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.926 --rc genhtml_branch_coverage=1 00:04:40.926 --rc genhtml_function_coverage=1 00:04:40.926 --rc genhtml_legend=1 00:04:40.926 --rc geninfo_all_blocks=1 00:04:40.926 --rc geninfo_unexecuted_blocks=1 00:04:40.926 00:04:40.926 ' 00:04:40.926 20:15:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:40.926 20:15:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60180 00:04:40.926 20:15:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.926 20:15:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60180 00:04:40.926 20:15:25 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60180 ']' 00:04:40.926 20:15:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:40.926 20:15:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.926 20:15:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.926 20:15:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:40.926 20:15:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.926 20:15:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.926 [2024-12-12 20:15:25.117632] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:04:40.926 [2024-12-12 20:15:25.117754] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60180 ] 00:04:41.184 [2024-12-12 20:15:25.274892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:41.184 [2024-12-12 20:15:25.377784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.184 [2024-12-12 20:15:25.378090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.184 [2024-12-12 20:15:25.378265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.184 [2024-12-12 20:15:25.378295] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:41.749 20:15:25 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.749 20:15:25 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:41.750 20:15:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:41.750 20:15:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.750 20:15:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:41.750 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:41.750 POWER: Cannot set governor of lcore 0 to userspace 00:04:41.750 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:41.750 POWER: Cannot set governor of lcore 0 to performance 00:04:41.750 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:41.750 POWER: Cannot set governor of lcore 0 to userspace 00:04:41.750 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:41.750 POWER: Cannot set governor of lcore 0 to userspace 00:04:41.750 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:41.750 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:41.750 POWER: Unable to set Power Management Environment for lcore 0 00:04:41.750 [2024-12-12 20:15:25.915508] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:41.750 [2024-12-12 20:15:25.915528] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:41.750 [2024-12-12 20:15:25.915537] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:41.750 [2024-12-12 20:15:25.915552] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:41.750 [2024-12-12 20:15:25.915560] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:41.750 [2024-12-12 20:15:25.915568] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:41.750 20:15:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.750 20:15:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:41.750 20:15:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.750 20:15:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.008 [2024-12-12 20:15:26.142844] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:42.008 20:15:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.008 20:15:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:42.008 20:15:26 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.008 20:15:26 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.008 20:15:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.008 ************************************ 00:04:42.008 START TEST scheduler_create_thread 00:04:42.008 ************************************ 00:04:42.008 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:42.008 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:42.008 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.008 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.008 2 00:04:42.008 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.008 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:42.008 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.008 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.008 3 00:04:42.008 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.008 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:42.008 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.008 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.008 4 00:04:42.008 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.008 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.009 5 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.009 6 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.009 7 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.009 8 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.009 9 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.009 10 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.009 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.267 20:15:26 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.267 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:42.267 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:42.267 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.267 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.267 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.267 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:42.267 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.267 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.833 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.833 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:42.833 20:15:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:42.833 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.833 20:15:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.764 ************************************ 00:04:43.764 END TEST scheduler_create_thread 00:04:43.764 ************************************ 00:04:43.764 20:15:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.764 00:04:43.764 real 0m1.753s 00:04:43.764 user 0m0.015s 00:04:43.764 sys 0m0.005s 00:04:43.764 20:15:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.764 20:15:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.764 20:15:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:43.764 20:15:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60180 00:04:43.764 20:15:27 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60180 ']' 00:04:43.764 20:15:27 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60180 00:04:43.764 20:15:27 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:43.764 20:15:27 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.764 20:15:27 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60180 00:04:43.764 killing process with pid 60180 00:04:43.764 20:15:27 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:43.764 20:15:27 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:43.764 20:15:27 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60180' 00:04:43.764 20:15:27 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60180 00:04:43.764 20:15:27 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 60180 00:04:44.407 [2024-12-12 20:15:28.383803] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:44.972 00:04:44.972 real 0m4.035s 00:04:44.972 user 0m6.582s 00:04:44.972 sys 0m0.319s 00:04:44.972 20:15:28 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.972 20:15:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.972 ************************************ 00:04:44.972 END TEST event_scheduler 00:04:44.972 ************************************ 00:04:44.972 20:15:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:44.972 20:15:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:44.972 20:15:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.972 20:15:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.972 20:15:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.972 ************************************ 00:04:44.972 START TEST app_repeat 00:04:44.972 ************************************ 00:04:44.972 20:15:29 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:44.972 20:15:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.972 20:15:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.972 20:15:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:44.972 20:15:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.972 20:15:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:44.972 20:15:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:44.972 20:15:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:44.972 20:15:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60274 00:04:44.972 20:15:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.972 20:15:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60274' 00:04:44.972 Process app_repeat pid: 60274 00:04:44.972 20:15:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:44.972 spdk_app_start Round 0 00:04:44.972 20:15:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:44.972 20:15:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60274 /var/tmp/spdk-nbd.sock 00:04:44.972 20:15:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:44.972 20:15:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60274 ']' 00:04:44.972 20:15:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.972 20:15:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:44.972 20:15:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.972 20:15:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.972 20:15:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.972 [2024-12-12 20:15:29.044134] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
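The teardown traced just above (autotest_common.sh@954-@978) is the guarded kill pattern the suite uses to stop a test app: validate the pid argument, probe the process with kill -0, resolve its name before signalling, then kill and wait. A minimal sketch of that pattern, with the function name and sudo handling only illustrative:

# Sketch of the killprocess-style teardown seen in the trace above.
killprocess_sketch() {
  local pid=$1
  [ -n "$pid" ] || return 1                           # @954: reject an empty pid
  kill -0 "$pid" || return 0                          # @958: already gone, nothing to do
  if [ "$(uname)" = Linux ]; then                     # @959
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # @960
    # @964: the trace only shows the '= sudo' comparison; bailing out here
    # is an assumption about how that branch is really handled
    [ "$process_name" = sudo ] && return 1
  fi
  echo "killing process with pid $pid"                # @972
  kill "$pid"                                         # @973
  wait "$pid" || true                                 # @978: reap it, tolerate its exit code
}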
00:04:44.972 [2024-12-12 20:15:29.044247] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60274 ] 00:04:45.231 [2024-12-12 20:15:29.202961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.231 [2024-12-12 20:15:29.297599] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.231 [2024-12-12 20:15:29.297678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.797 20:15:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.797 20:15:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:45.797 20:15:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.056 Malloc0 00:04:46.056 20:15:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.313 Malloc1 00:04:46.313 20:15:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.313 20:15:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:46.571 /dev/nbd0 00:04:46.571 20:15:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:46.571 20:15:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:46.571 20:15:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:46.571 20:15:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:46.571 20:15:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:46.571 20:15:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:46.571 20:15:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:46.571 20:15:30 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:04:46.571 20:15:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:46.571 20:15:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:46.571 20:15:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.571 1+0 records in 00:04:46.571 1+0 records out 00:04:46.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329881 s, 12.4 MB/s 00:04:46.571 20:15:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.571 20:15:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:46.571 20:15:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.571 20:15:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:46.571 20:15:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:46.571 20:15:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.571 20:15:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.571 20:15:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:46.829 /dev/nbd1 00:04:46.829 20:15:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:46.829 20:15:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:46.829 20:15:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:46.829 20:15:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:46.829 20:15:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:46.829 20:15:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:46.829 20:15:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:46.829 20:15:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:46.829 20:15:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:46.829 20:15:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:46.829 20:15:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.829 1+0 records in 00:04:46.829 1+0 records out 00:04:46.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249321 s, 16.4 MB/s 00:04:46.829 20:15:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.829 20:15:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:46.829 20:15:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.829 20:15:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:46.829 20:15:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:46.829 20:15:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.829 20:15:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.829 20:15:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.829 20:15:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
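Every nbd_start_disk above is immediately followed by the waitfornbd probe (autotest_common.sh@872-@893): poll /proc/partitions until the device shows up, then read a single 4096-byte block with direct I/O and confirm a non-empty file landed on disk. Condensed, with the test-file path shortened from the one in the trace:

# Condensed sketch of the waitfornbd probe (autotest_common.sh@872-@893).
waitfornbd_sketch() {
  local nbd_name=$1 testfile=/tmp/nbdtest size i
  for ((i = 1; i <= 20; i++)); do                      # @875: bounded retries
    grep -q -w "$nbd_name" /proc/partitions && break   # @876: device visible yet?
    sleep 0.1                                          # assumption: the retry delay is not in the trace
  done
  ((i <= 20)) || return 1
  # @889: one direct-I/O read proves the device actually services requests
  dd if="/dev/$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct
  size=$(stat -c %s "$testfile")                       # @890
  rm -f "$testfile"                                    # @891
  [ "$size" != 0 ]                                     # @892-@893: non-empty read = device ready
}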
00:04:46.829 20:15:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:47.088 { 00:04:47.088 "nbd_device": "/dev/nbd0", 00:04:47.088 "bdev_name": "Malloc0" 00:04:47.088 }, 00:04:47.088 { 00:04:47.088 "nbd_device": "/dev/nbd1", 00:04:47.088 "bdev_name": "Malloc1" 00:04:47.088 } 00:04:47.088 ]' 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:47.088 { 00:04:47.088 "nbd_device": "/dev/nbd0", 00:04:47.088 "bdev_name": "Malloc0" 00:04:47.088 }, 00:04:47.088 { 00:04:47.088 "nbd_device": "/dev/nbd1", 00:04:47.088 "bdev_name": "Malloc1" 00:04:47.088 } 00:04:47.088 ]' 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:47.088 /dev/nbd1' 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:47.088 /dev/nbd1' 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:47.088 256+0 records in 00:04:47.088 256+0 records out 00:04:47.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00654409 s, 160 MB/s 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:47.088 256+0 records in 00:04:47.088 256+0 records out 00:04:47.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190262 s, 55.1 MB/s 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:47.088 256+0 records in 00:04:47.088 256+0 records out 00:04:47.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194147 s, 54.0 MB/s 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.088 20:15:31 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.088 20:15:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:47.346 20:15:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:47.346 20:15:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:47.346 20:15:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:47.346 20:15:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.346 20:15:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.346 20:15:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:47.346 20:15:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.346 20:15:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.346 20:15:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.346 20:15:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.605 20:15:31 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.605 20:15:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:47.863 20:15:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:47.863 20:15:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:47.863 20:15:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:47.863 20:15:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:47.863 20:15:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:47.863 20:15:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:48.121 20:15:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:48.687 [2024-12-12 20:15:32.698156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.687 [2024-12-12 20:15:32.771789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.687 [2024-12-12 20:15:32.771987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.687 [2024-12-12 20:15:32.867950] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.687 [2024-12-12 20:15:32.868012] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:51.223 spdk_app_start Round 1 00:04:51.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:51.223 20:15:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:51.223 20:15:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:51.223 20:15:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60274 /var/tmp/spdk-nbd.sock 00:04:51.223 20:15:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60274 ']' 00:04:51.223 20:15:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.223 20:15:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.223 20:15:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
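The Malloc0/Malloc1 round that just finished is nbd_dd_data_verify (nbd_common.sh@70-@85) run twice, once in write mode and once in verify mode: one shared 1 MiB random image is written through each exported device with direct I/O, then the first 1 MiB of every device is compared byte-for-byte against that same image. The core of it, using the device list and the dd/cmp invocations exactly as traced, with paths shortened:

# Core of nbd_dd_data_verify (nbd_common.sh@70-@85).
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest

# write pass (@76-@78): build the random image, push it to every device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for i in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
done

# verify pass (@82-@83): a byte-wise mismatch makes cmp exit non-zero and fail the test
for i in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$i"
done
rm "$tmp_file"                      # @85: drop the scratch image between rounds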
00:04:51.223 20:15:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.223 20:15:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:51.223 20:15:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.223 20:15:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:51.223 20:15:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.481 Malloc0 00:04:51.481 20:15:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.740 Malloc1 00:04:51.740 20:15:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.740 20:15:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.999 /dev/nbd0 00:04:51.999 20:15:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.999 20:15:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:51.999 20:15:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:51.999 20:15:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:51.999 20:15:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:51.999 20:15:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:51.999 20:15:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:51.999 20:15:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:51.999 20:15:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:51.999 20:15:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:51.999 20:15:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.999 1+0 records in 00:04:51.999 1+0 records out 
00:04:51.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330909 s, 12.4 MB/s 00:04:51.999 20:15:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.999 20:15:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:51.999 20:15:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.999 20:15:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:51.999 20:15:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:51.999 20:15:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.999 20:15:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.999 20:15:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.258 /dev/nbd1 00:04:52.258 20:15:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.258 20:15:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.258 20:15:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:52.258 20:15:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:52.258 20:15:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:52.258 20:15:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:52.258 20:15:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:52.258 20:15:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:52.258 20:15:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:52.258 20:15:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:52.258 20:15:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.258 1+0 records in 00:04:52.258 1+0 records out 00:04:52.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198017 s, 20.7 MB/s 00:04:52.258 20:15:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.258 20:15:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:52.258 20:15:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.258 20:15:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:52.258 20:15:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:52.258 20:15:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.258 20:15:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.258 20:15:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.258 20:15:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.258 20:15:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.517 { 00:04:52.517 "nbd_device": "/dev/nbd0", 00:04:52.517 "bdev_name": "Malloc0" 00:04:52.517 }, 00:04:52.517 { 00:04:52.517 "nbd_device": "/dev/nbd1", 00:04:52.517 "bdev_name": "Malloc1" 00:04:52.517 } 
00:04:52.517 ]' 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.517 { 00:04:52.517 "nbd_device": "/dev/nbd0", 00:04:52.517 "bdev_name": "Malloc0" 00:04:52.517 }, 00:04:52.517 { 00:04:52.517 "nbd_device": "/dev/nbd1", 00:04:52.517 "bdev_name": "Malloc1" 00:04:52.517 } 00:04:52.517 ]' 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.517 /dev/nbd1' 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.517 /dev/nbd1' 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.517 256+0 records in 00:04:52.517 256+0 records out 00:04:52.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506898 s, 207 MB/s 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.517 256+0 records in 00:04:52.517 256+0 records out 00:04:52.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207038 s, 50.6 MB/s 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.517 256+0 records in 00:04:52.517 256+0 records out 00:04:52.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153731 s, 68.2 MB/s 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.517 20:15:36 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.517 20:15:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.775 20:15:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.775 20:15:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.775 20:15:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.775 20:15:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.775 20:15:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.775 20:15:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.775 20:15:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.775 20:15:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.775 20:15:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.775 20:15:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.033 20:15:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.034 20:15:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.034 20:15:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.034 20:15:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.034 20:15:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.034 20:15:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.034 20:15:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.034 20:15:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.034 20:15:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.034 20:15:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.034 20:15:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.034 20:15:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.034 20:15:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.034 20:15:37 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:53.034 20:15:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.292 20:15:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.292 20:15:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.292 20:15:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:53.292 20:15:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.292 20:15:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.292 20:15:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.292 20:15:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.292 20:15:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.292 20:15:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.549 20:15:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.116 [2024-12-12 20:15:38.111347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.116 [2024-12-12 20:15:38.185401] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.116 [2024-12-12 20:15:38.185439] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.116 [2024-12-12 20:15:38.288798] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.116 [2024-12-12 20:15:38.288846] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.647 spdk_app_start Round 2 00:04:56.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.647 20:15:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:56.647 20:15:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:56.647 20:15:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60274 /var/tmp/spdk-nbd.sock 00:04:56.647 20:15:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60274 ']' 00:04:56.647 20:15:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.647 20:15:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.647 20:15:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
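The count checks bracketing each round come from nbd_get_count (nbd_common.sh@61-@66): dump nbd_get_disks as JSON, project out each nbd_device with jq, and count the /dev/nbd matches. The suite expects 2 while both disks are attached and 0 after nbd_stop_disk, when the RPC returns an empty '[]'. As a sketch, with the rpc.py path abbreviated:

# Sketch of nbd_get_count (nbd_common.sh@61-@66).
rpc_server=/var/tmp/spdk-nbd.sock
nbd_get_count_sketch() {
  local nbd_disks_json nbd_disks_name count
  nbd_disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  # grep -c prints 0 but exits 1 when nothing matches; the bare `true` at @65
  # in the trace is that empty-list case being swallowed
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
  echo "$count"
}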
00:04:56.647 20:15:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.647 20:15:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.647 20:15:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.647 20:15:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:56.647 20:15:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.956 Malloc0 00:04:56.956 20:15:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.214 Malloc1 00:04:57.214 20:15:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.214 20:15:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:57.214 /dev/nbd0 00:04:57.472 20:15:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:57.472 20:15:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.472 1+0 records in 00:04:57.472 1+0 records out 
00:04:57.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288817 s, 14.2 MB/s 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.472 20:15:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.472 20:15:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.472 20:15:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.472 /dev/nbd1 00:04:57.472 20:15:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.472 20:15:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.472 1+0 records in 00:04:57.472 1+0 records out 00:04:57.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287355 s, 14.3 MB/s 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.472 20:15:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.472 20:15:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.472 20:15:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.472 20:15:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.472 20:15:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.472 20:15:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:57.730 { 00:04:57.730 "nbd_device": "/dev/nbd0", 00:04:57.730 "bdev_name": "Malloc0" 00:04:57.730 }, 00:04:57.730 { 00:04:57.730 "nbd_device": "/dev/nbd1", 00:04:57.730 "bdev_name": "Malloc1" 00:04:57.730 } 
00:04:57.730 ]' 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:57.730 { 00:04:57.730 "nbd_device": "/dev/nbd0", 00:04:57.730 "bdev_name": "Malloc0" 00:04:57.730 }, 00:04:57.730 { 00:04:57.730 "nbd_device": "/dev/nbd1", 00:04:57.730 "bdev_name": "Malloc1" 00:04:57.730 } 00:04:57.730 ]' 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:57.730 /dev/nbd1' 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:57.730 /dev/nbd1' 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:57.730 256+0 records in 00:04:57.730 256+0 records out 00:04:57.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00674643 s, 155 MB/s 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:57.730 256+0 records in 00:04:57.730 256+0 records out 00:04:57.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016952 s, 61.9 MB/s 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:57.730 256+0 records in 00:04:57.730 256+0 records out 00:04:57.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147401 s, 71.1 MB/s 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.730 20:15:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:57.989 20:15:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:57.989 20:15:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:57.989 20:15:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:57.989 20:15:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.989 20:15:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.989 20:15:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:57.989 20:15:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.989 20:15:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.989 20:15:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.989 20:15:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.248 20:15:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.248 20:15:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.248 20:15:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.248 20:15:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.248 20:15:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.248 20:15:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.248 20:15:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.248 20:15:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.248 20:15:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.248 20:15:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.248 20:15:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.507 20:15:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:58.507 20:15:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:58.507 20:15:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:04:58.507 20:15:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:58.507 20:15:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:58.507 20:15:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.507 20:15:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:58.507 20:15:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:58.507 20:15:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:58.507 20:15:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:58.507 20:15:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:58.507 20:15:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:58.507 20:15:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:58.765 20:15:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.331 [2024-12-12 20:15:43.385164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.331 [2024-12-12 20:15:43.461499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.331 [2024-12-12 20:15:43.461655] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.589 [2024-12-12 20:15:43.564037] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.589 [2024-12-12 20:15:43.564092] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.122 20:15:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60274 /var/tmp/spdk-nbd.sock 00:05:02.122 20:15:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60274 ']' 00:05:02.122 20:15:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.122 20:15:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:02.122 20:15:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
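Each round above follows the same driver cycle from event.sh: announce the round, wait for the app to listen on the unix socket, run the malloc/nbd verification, then ask the app to stop its current iteration with spdk_kill_instance SIGTERM and sleep before the next pass. Condensed to its shape, with waitforlisten and killprocess standing in for the helpers traced earlier and the verify step elided:

# Condensed shape of the app_repeat driver (event.sh@18-@35).
rpc_server=/var/tmp/spdk-nbd.sock
test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &   # @18: -t 4 = repeat_times
repeat_pid=$!                                                     # @19
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT        # @20

for i in {0..2}; do                                               # @23
  echo "spdk_app_start Round $i"                                  # @24
  waitforlisten "$repeat_pid" "$rpc_server"                       # @25
  # @27-@30: bdev_malloc_create x2, then the nbd start/verify/stop sequence
  scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM      # @34
  sleep 3                                                         # @35: give the app time to reinitialize
done

The "Shutdown signal received, stop current app iteration" lines separating the rounds are the app side of that SIGTERM.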
00:05:02.122 20:15:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.122 20:15:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.122 20:15:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.122 20:15:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:02.122 20:15:46 event.app_repeat -- event/event.sh@39 -- # killprocess 60274 00:05:02.122 20:15:46 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60274 ']' 00:05:02.122 20:15:46 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60274 00:05:02.122 20:15:46 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:02.122 20:15:46 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.122 20:15:46 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60274 00:05:02.122 20:15:46 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.122 killing process with pid 60274 00:05:02.122 20:15:46 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.122 20:15:46 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60274' 00:05:02.122 20:15:46 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60274 00:05:02.122 20:15:46 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60274 00:05:02.381 spdk_app_start is called in Round 0. 00:05:02.381 Shutdown signal received, stop current app iteration 00:05:02.381 Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 reinitialization... 00:05:02.381 spdk_app_start is called in Round 1. 00:05:02.381 Shutdown signal received, stop current app iteration 00:05:02.381 Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 reinitialization... 00:05:02.381 spdk_app_start is called in Round 2. 00:05:02.381 Shutdown signal received, stop current app iteration 00:05:02.381 Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 reinitialization... 00:05:02.381 spdk_app_start is called in Round 3. 00:05:02.381 Shutdown signal received, stop current app iteration 00:05:02.381 ************************************ 00:05:02.381 END TEST app_repeat 00:05:02.381 ************************************ 00:05:02.381 20:15:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:02.381 20:15:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:02.381 00:05:02.381 real 0m17.578s 00:05:02.381 user 0m38.506s 00:05:02.381 sys 0m2.004s 00:05:02.381 20:15:46 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.381 20:15:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.381 20:15:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:02.381 20:15:46 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:02.381 20:15:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.381 20:15:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.381 20:15:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.639 ************************************ 00:05:02.639 START TEST cpu_locks 00:05:02.639 ************************************ 00:05:02.639 20:15:46 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:02.640 * Looking for test storage... 
00:05:02.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:02.640 20:15:46 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.640 20:15:46 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.640 20:15:46 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.640 20:15:46 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.640 20:15:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:02.640 20:15:46 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.640 20:15:46 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.640 --rc genhtml_branch_coverage=1 00:05:02.640 --rc genhtml_function_coverage=1 00:05:02.640 --rc genhtml_legend=1 00:05:02.640 --rc geninfo_all_blocks=1 00:05:02.640 --rc geninfo_unexecuted_blocks=1 00:05:02.640 00:05:02.640 ' 00:05:02.640 20:15:46 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.640 --rc genhtml_branch_coverage=1 00:05:02.640 --rc genhtml_function_coverage=1 
00:05:02.640 --rc genhtml_legend=1 00:05:02.640 --rc geninfo_all_blocks=1 00:05:02.640 --rc geninfo_unexecuted_blocks=1 00:05:02.640 00:05:02.640 ' 00:05:02.640 20:15:46 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.640 --rc genhtml_branch_coverage=1 00:05:02.640 --rc genhtml_function_coverage=1 00:05:02.640 --rc genhtml_legend=1 00:05:02.640 --rc geninfo_all_blocks=1 00:05:02.640 --rc geninfo_unexecuted_blocks=1 00:05:02.640 00:05:02.640 ' 00:05:02.640 20:15:46 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.640 --rc genhtml_branch_coverage=1 00:05:02.640 --rc genhtml_function_coverage=1 00:05:02.640 --rc genhtml_legend=1 00:05:02.640 --rc geninfo_all_blocks=1 00:05:02.640 --rc geninfo_unexecuted_blocks=1 00:05:02.640 00:05:02.640 ' 00:05:02.640 20:15:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:02.640 20:15:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:02.640 20:15:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:02.640 20:15:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:02.640 20:15:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.640 20:15:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.640 20:15:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.640 ************************************ 00:05:02.640 START TEST default_locks 00:05:02.640 ************************************ 00:05:02.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.640 20:15:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:02.640 20:15:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60706 00:05:02.640 20:15:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60706 00:05:02.640 20:15:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60706 ']' 00:05:02.640 20:15:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.640 20:15:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.640 20:15:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.640 20:15:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.640 20:15:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.640 20:15:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.640 [2024-12-12 20:15:46.842526] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:05:02.640 [2024-12-12 20:15:46.842799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60706 ] 00:05:02.898 [2024-12-12 20:15:46.999630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.898 [2024-12-12 20:15:47.081249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.466 20:15:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.466 20:15:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:03.466 20:15:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60706 00:05:03.466 20:15:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.466 20:15:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60706 00:05:03.726 20:15:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60706 00:05:03.726 20:15:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60706 ']' 00:05:03.726 20:15:47 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60706 00:05:03.726 20:15:47 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:03.726 20:15:47 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.726 20:15:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60706 00:05:03.726 killing process with pid 60706 00:05:03.726 20:15:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.726 20:15:47 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.726 20:15:47 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60706' 00:05:03.726 20:15:47 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60706 00:05:03.726 20:15:47 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60706 00:05:05.108 20:15:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60706 00:05:05.108 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:05.108 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60706 00:05:05.108 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:05.108 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.108 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:05.108 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.108 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60706 00:05:05.108 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60706 ']' 00:05:05.108 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
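
The locks_exist check traced above is the core assertion of default_locks: a target launched with -m 0x1 and without --disable-cpumask-locks must hold a lock that shows up in its per-process lock table. A hedged sketch of that check, reusing the exact lslocks/grep pipeline from the trace (the pid is this run's and purely illustrative):

    pid=60706
    # SPDK holds one locked /var/tmp/spdk_cpu_lock_NNN file per claimed
    # core, so the target's lock table must list an spdk_cpu_lock entry.
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds its CPU core lock"
    fi
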
00:05:05.109 ERROR: process (pid: 60706) is no longer running 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.109 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60706) - No such process 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:05.109 00:05:05.109 real 0m2.484s 00:05:05.109 user 0m2.492s 00:05:05.109 sys 0m0.438s 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.109 20:15:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.109 ************************************ 00:05:05.109 END TEST default_locks 00:05:05.109 ************************************ 00:05:05.109 20:15:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:05.109 20:15:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.109 20:15:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.109 20:15:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.109 ************************************ 00:05:05.109 START TEST default_locks_via_rpc 00:05:05.109 ************************************ 00:05:05.109 20:15:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:05.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
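
default_locks finishes on a negative assertion: after killprocess, waiting on the dead pid must fail, and the NOT wrapper converts that expected failure (the es=1 seen above) into a pass. A reduced sketch of the idea; the real helper in autotest_common.sh also inspects exit-status ranges, so the one-line NOT below is only an assumed stand-in:

    NOT() { ! "$@"; }   # assumption: plain status inversion
    NOT kill -0 60706 2>/dev/null && echo "pid 60706 is gone, as expected"
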
00:05:05.109 20:15:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60759 00:05:05.109 20:15:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60759 00:05:05.109 20:15:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60759 ']' 00:05:05.109 20:15:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.109 20:15:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.109 20:15:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.109 20:15:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.109 20:15:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.109 20:15:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.370 [2024-12-12 20:15:49.367727] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:05:05.370 [2024-12-12 20:15:49.367838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60759 ] 00:05:05.370 [2024-12-12 20:15:49.526134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.631 [2024-12-12 20:15:49.629848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.202 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60759 00:05:06.203 20:15:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60759 00:05:06.203 
20:15:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:06.464 20:15:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60759 00:05:06.464 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60759 ']' 00:05:06.464 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60759 00:05:06.464 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:06.464 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.464 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60759 00:05:06.464 killing process with pid 60759 00:05:06.464 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.464 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.464 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60759' 00:05:06.464 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60759 00:05:06.464 20:15:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60759 00:05:08.378 ************************************ 00:05:08.378 END TEST default_locks_via_rpc 00:05:08.378 ************************************ 00:05:08.378 00:05:08.378 real 0m2.887s 00:05:08.378 user 0m2.842s 00:05:08.378 sys 0m0.485s 00:05:08.378 20:15:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.378 20:15:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.378 20:15:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:08.378 20:15:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.378 20:15:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.378 20:15:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.378 ************************************ 00:05:08.378 START TEST non_locking_app_on_locked_coremask 00:05:08.378 ************************************ 00:05:08.378 20:15:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:08.378 20:15:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60822 00:05:08.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
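
The default_locks_via_rpc run that closed just above exercised the runtime toggle instead of the startup flag. Its traced sequence, condensed into a hedged sketch (rpc.py talks to the default /var/tmp/spdk.sock here, and 60759 is this run's pid):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" framework_disable_cpumask_locks             # releases the per-core lock files
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null \
        || echo "no lock files, as expected"           # mirrors the no_locks helper
    "$rpc" framework_enable_cpumask_locks              # re-acquires them
    lslocks -p 60759 | grep -q spdk_cpu_lock && echo "lock held again"
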
00:05:08.378 20:15:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60822 /var/tmp/spdk.sock 00:05:08.378 20:15:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60822 ']' 00:05:08.378 20:15:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.378 20:15:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.378 20:15:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.378 20:15:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.378 20:15:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.378 20:15:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.378 [2024-12-12 20:15:52.299172] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:05:08.378 [2024-12-12 20:15:52.299432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60822 ] 00:05:08.378 [2024-12-12 20:15:52.458674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.378 [2024-12-12 20:15:52.572986] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.321 20:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.321 20:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:09.321 20:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:09.321 20:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60838 00:05:09.321 20:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60838 /var/tmp/spdk2.sock 00:05:09.321 20:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60838 ']' 00:05:09.321 20:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.321 20:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.321 20:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
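
The two launch commands traced above are the whole point of non_locking_app_on_locked_coremask: the first target claims core 0's lock, and a second target on the same core still comes up because --disable-cpumask-locks skips lock acquisition (hence the "CPU core locks deactivated." notice that follows). Side by side, with both command lines copied from the trace and backgrounding left to the harness:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x1 &                                                  # claims the core-0 lock
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock taken
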
00:05:09.321 20:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.321 20:15:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.321 [2024-12-12 20:15:53.294001] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:05:09.321 [2024-12-12 20:15:53.294287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60838 ] 00:05:09.321 [2024-12-12 20:15:53.470000] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:09.321 [2024-12-12 20:15:53.470064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.582 [2024-12-12 20:15:53.689345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.524 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.524 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:10.524 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60822 00:05:10.524 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60822 00:05:10.524 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.785 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60822 00:05:10.785 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60822 ']' 00:05:10.785 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60822 00:05:10.785 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:10.785 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.785 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60822 00:05:10.785 killing process with pid 60822 00:05:10.785 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.785 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.785 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60822' 00:05:10.785 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60822 00:05:10.785 20:15:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60822 00:05:14.144 20:15:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60838 00:05:14.144 20:15:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60838 ']' 00:05:14.144 20:15:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60838 00:05:14.144 20:15:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 
-- # uname 00:05:14.144 20:15:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.144 20:15:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60838 00:05:14.144 killing process with pid 60838 00:05:14.144 20:15:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.144 20:15:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.144 20:15:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60838' 00:05:14.144 20:15:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60838 00:05:14.144 20:15:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60838 00:05:15.088 ************************************ 00:05:15.088 END TEST non_locking_app_on_locked_coremask 00:05:15.088 ************************************ 00:05:15.088 00:05:15.088 real 0m6.873s 00:05:15.088 user 0m7.026s 00:05:15.088 sys 0m0.855s 00:05:15.088 20:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.088 20:15:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.088 20:15:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:15.088 20:15:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.088 20:15:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.088 20:15:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.088 ************************************ 00:05:15.088 START TEST locking_app_on_unlocked_coremask 00:05:15.088 ************************************ 00:05:15.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.088 20:15:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:15.088 20:15:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60940 00:05:15.088 20:15:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:15.088 20:15:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60940 /var/tmp/spdk.sock 00:05:15.088 20:15:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60940 ']' 00:05:15.088 20:15:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.088 20:15:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.088 20:15:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
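
killprocess, traced just above for pid 60838, follows the same recipe everywhere in this log: confirm the pid is alive, resolve its command name (an SPDK reactor reports as reactor_0), branch on whether it is sudo, then SIGTERM and reap. A sketch of the common path, assuming the target is a child job of the current shell so that wait applies:

    pid=60838
    kill -0 "$pid"                            # still alive?
    name=$(ps --no-headers -o comm= "$pid")   # -> reactor_0 for an SPDK app
    if [ "$name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    fi
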
00:05:15.088 20:15:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.088 20:15:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.088 [2024-12-12 20:15:59.209720] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:05:15.088 [2024-12-12 20:15:59.209841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60940 ] 00:05:15.349 [2024-12-12 20:15:59.360714] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:15.349 [2024-12-12 20:15:59.361091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.349 [2024-12-12 20:15:59.458357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.922 20:16:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.922 20:16:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:15.922 20:16:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:15.922 20:16:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60956 00:05:15.922 20:16:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60956 /var/tmp/spdk2.sock 00:05:15.922 20:16:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60956 ']' 00:05:15.922 20:16:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.922 20:16:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.922 20:16:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.922 20:16:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.922 20:16:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.922 [2024-12-12 20:16:00.119036] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:05:15.922 [2024-12-12 20:16:00.119315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60956 ] 00:05:16.183 [2024-12-12 20:16:00.292544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.444 [2024-12-12 20:16:00.490741] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60956 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60956 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60940 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60940 ']' 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60940 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60940 00:05:17.831 killing process with pid 60940 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60940' 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60940 00:05:17.831 20:16:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60940 00:05:20.386 20:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60956 00:05:20.386 20:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60956 ']' 00:05:20.386 20:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60956 00:05:20.386 20:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:20.386 20:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.386 20:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60956 00:05:20.386 killing process with pid 60956 00:05:20.386 20:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.386 20:16:04 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.386 20:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60956' 00:05:20.386 20:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60956 00:05:20.386 20:16:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60956 00:05:21.789 00:05:21.789 real 0m6.436s 00:05:21.789 user 0m6.614s 00:05:21.789 sys 0m0.862s 00:05:21.789 20:16:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.790 20:16:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.790 ************************************ 00:05:21.790 END TEST locking_app_on_unlocked_coremask 00:05:21.790 ************************************ 00:05:21.790 20:16:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:21.790 20:16:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.790 20:16:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.790 20:16:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.790 ************************************ 00:05:21.790 START TEST locking_app_on_locked_coremask 00:05:21.790 ************************************ 00:05:21.790 20:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:21.790 20:16:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61047 00:05:21.790 20:16:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61047 /var/tmp/spdk.sock 00:05:21.790 20:16:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.790 20:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61047 ']' 00:05:21.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.790 20:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.790 20:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.790 20:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.790 20:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.790 20:16:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.790 [2024-12-12 20:16:05.693048] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:05:21.790 [2024-12-12 20:16:05.693169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61047 ] 00:05:21.790 [2024-12-12 20:16:05.849315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.790 [2024-12-12 20:16:05.936159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61063 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61063 /var/tmp/spdk2.sock 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61063 /var/tmp/spdk2.sock 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61063 /var/tmp/spdk2.sock 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61063 ']' 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.363 20:16:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.363 [2024-12-12 20:16:06.588521] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:05:22.363 [2024-12-12 20:16:06.588813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61063 ] 00:05:22.624 [2024-12-12 20:16:06.750799] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61047 has claimed it. 00:05:22.624 [2024-12-12 20:16:06.750854] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:23.197 ERROR: process (pid: 61063) is no longer running 00:05:23.197 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61063) - No such process 00:05:23.197 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.197 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:23.197 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:23.197 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:23.197 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:23.197 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:23.197 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61047 00:05:23.197 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61047 00:05:23.197 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.458 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61047 00:05:23.458 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61047 ']' 00:05:23.458 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61047 00:05:23.458 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:23.458 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.458 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61047 00:05:23.458 killing process with pid 61047 00:05:23.458 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.458 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.458 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61047' 00:05:23.458 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61047 00:05:23.458 20:16:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61047 00:05:24.840 00:05:24.840 real 0m3.101s 00:05:24.840 user 0m3.309s 00:05:24.840 sys 0m0.553s 00:05:24.840 20:16:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.840 ************************************ 00:05:24.840 
20:16:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.840 END TEST locking_app_on_locked_coremask 00:05:24.840 ************************************ 00:05:24.840 20:16:08 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:24.840 20:16:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.840 20:16:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.840 20:16:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.840 ************************************ 00:05:24.840 START TEST locking_overlapped_coremask 00:05:24.840 ************************************ 00:05:24.840 20:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:24.840 20:16:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61116 00:05:24.840 20:16:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61116 /var/tmp/spdk.sock 00:05:24.840 20:16:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:24.840 20:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61116 ']' 00:05:24.840 20:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.840 20:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.840 20:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.840 20:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.840 20:16:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.840 [2024-12-12 20:16:08.838531] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:05:24.840 [2024-12-12 20:16:08.838822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61116 ] 00:05:24.840 [2024-12-12 20:16:08.999813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.100 [2024-12-12 20:16:09.106772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.100 [2024-12-12 20:16:09.107033] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.100 [2024-12-12 20:16:09.107048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61134 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61134 /var/tmp/spdk2.sock 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61134 /var/tmp/spdk2.sock 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:25.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61134 /var/tmp/spdk2.sock 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61134 ']' 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.669 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.670 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.670 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.670 20:16:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.670 [2024-12-12 20:16:09.845200] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:05:25.670 [2024-12-12 20:16:09.845319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61134 ] 00:05:25.930 [2024-12-12 20:16:10.017782] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61116 has claimed it. 00:05:25.930 [2024-12-12 20:16:10.017846] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:26.501 ERROR: process (pid: 61134) is no longer running 00:05:26.501 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61134) - No such process 00:05:26.501 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.501 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:26.501 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61116 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61116 ']' 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61116 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61116 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61116' 00:05:26.502 killing process with pid 61116 00:05:26.502 20:16:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61116 00:05:26.502 20:16:10 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61116 00:05:27.897 00:05:27.897 real 0m3.359s 00:05:27.897 user 0m9.094s 00:05:27.897 sys 0m0.503s 00:05:27.897 20:16:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.897 20:16:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.897 ************************************ 00:05:27.897 END TEST locking_overlapped_coremask 00:05:27.897 ************************************ 00:05:28.155 20:16:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:28.156 20:16:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.156 20:16:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.156 20:16:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.156 ************************************ 00:05:28.156 START TEST locking_overlapped_coremask_via_rpc 00:05:28.156 ************************************ 00:05:28.156 20:16:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:28.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.156 20:16:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61193 00:05:28.156 20:16:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61193 /var/tmp/spdk.sock 00:05:28.156 20:16:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61193 ']' 00:05:28.156 20:16:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.156 20:16:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.156 20:16:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.156 20:16:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.156 20:16:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.156 20:16:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:28.156 [2024-12-12 20:16:12.238279] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:05:28.156 [2024-12-12 20:16:12.238571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61193 ] 00:05:28.416 [2024-12-12 20:16:12.395813] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
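
The claim failure in the overlapped test above is plain mask arithmetic: 0x7 covers cores 0 through 2 and 0x1c covers cores 2 through 4, so the second target trips over exactly the core named in "Cannot create lock on core 2, probably process 61116 has claimed it", and check_remaining_locks then confirms the survivor still holds /var/tmp/spdk_cpu_lock_000 through _002. The overlap in one line:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2, i.e. core 2
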
00:05:28.416 [2024-12-12 20:16:12.395857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.416 [2024-12-12 20:16:12.497907] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.416 [2024-12-12 20:16:12.498186] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.416 [2024-12-12 20:16:12.498201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.988 20:16:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.988 20:16:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:28.988 20:16:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:28.988 20:16:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61211 00:05:28.988 20:16:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61211 /var/tmp/spdk2.sock 00:05:28.988 20:16:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61211 ']' 00:05:28.988 20:16:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.988 20:16:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.988 20:16:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.988 20:16:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.988 20:16:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.988 [2024-12-12 20:16:13.168052] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:05:28.988 [2024-12-12 20:16:13.168393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61211 ] 00:05:29.249 [2024-12-12 20:16:13.344811] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
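The two targets in this test are started with -m 0x7 and -m 0x1c, and --disable-cpumask-locks lets both come up even though the masks intersect: 0x7 is binary 111 (cores 0, 1, 2) and 0x1c is binary 11100 (cores 2, 3, 4), so they share core 2. A quick standalone way to compute that overlap (an illustration, not part of the test suite):

    # Bitwise intersection of the two core masks used by this test.
    mask_a=$(( 0x07 ))   # first spdk_tgt: cores 0, 1, 2
    mask_b=$(( 0x1c ))   # second spdk_tgt: cores 2, 3, 4
    overlap=$(( mask_a & mask_b ))
    printf 'overlap mask: 0x%x\n' "$overlap"   # prints 0x4, i.e. core 2
    for (( core = 0; (overlap >> core) > 0; core++ )); do
        (( (overlap >> core) & 1 )) && echo "shared core: $core"
    done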
00:05:29.249 [2024-12-12 20:16:13.344871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.510 [2024-12-12 20:16:13.550902] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.510 [2024-12-12 20:16:13.550965] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.510 [2024-12-12 20:16:13.550985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.450 [2024-12-12 20:16:14.656530] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61193 has claimed it. 00:05:30.450 request: 00:05:30.450 { 00:05:30.450 "method": "framework_enable_cpumask_locks", 00:05:30.450 "req_id": 1 00:05:30.450 } 00:05:30.450 Got JSON-RPC error response 00:05:30.450 response: 00:05:30.450 { 00:05:30.450 "code": -32603, 00:05:30.450 "message": "Failed to claim CPU core: 2" 00:05:30.450 } 00:05:30.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
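What just happened, condensed: both targets started with core locks disabled, the first enabled its locks via the framework_enable_cpumask_locks RPC and claimed cores 0-2, so the same RPC against the second target's socket fails with -32603 ("Failed to claim CPU core: 2"), exactly as the JSON-RPC response above shows. A sketch of that sequence using scripts/rpc.py directly, with the socket paths from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # First target (default socket /var/tmp/spdk.sock): claims its cores.
    "$rpc" framework_enable_cpumask_locks

    # Second target: its mask overlaps on core 2, so this claim is expected
    # to fail with JSON-RPC error -32603.
    if ! "$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo "second target could not claim its cores (expected)"
    fi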
00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61193 /var/tmp/spdk.sock 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61193 ']' 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.450 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.708 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.708 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.708 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61211 /var/tmp/spdk2.sock 00:05:30.708 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61211 ']' 00:05:30.708 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.708 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.708 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
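The retry traces above are autotest_common.sh's waitforlisten helper: it polls until the target's RPC socket answers, up to max_retries=100, while checking that the process is still alive. Roughly equivalent logic, written out as an approximation for illustration (the real helper lives in test/common/autotest_common.sh and may differ in detail):

    # Poll an SPDK RPC socket until the target responds, waitforlisten-style.
    pid=61211
    rpc_addr=/var/tmp/spdk2.sock
    max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        # rpc_get_methods succeeds once the app is up and serving RPCs
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$pid" || { echo "process $pid died before listening" >&2; exit 1; }
        sleep 0.5
    done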
00:05:30.708 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.708 20:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.966 ************************************ 00:05:30.966 END TEST locking_overlapped_coremask_via_rpc 00:05:30.966 ************************************ 00:05:30.966 20:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.966 20:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.966 20:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:30.966 20:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:30.966 20:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:30.966 20:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:30.966 00:05:30.966 real 0m2.968s 00:05:30.966 user 0m1.118s 00:05:30.966 sys 0m0.139s 00:05:30.966 20:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.966 20:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.966 20:16:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:30.966 20:16:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61193 ]] 00:05:30.966 20:16:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61193 00:05:30.966 20:16:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61193 ']' 00:05:30.966 20:16:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61193 00:05:30.966 20:16:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:30.966 20:16:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.966 20:16:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61193 00:05:30.966 killing process with pid 61193 00:05:30.966 20:16:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.966 20:16:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.966 20:16:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61193' 00:05:30.966 20:16:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61193 00:05:30.966 20:16:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61193 00:05:32.869 20:16:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61211 ]] 00:05:32.869 20:16:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61211 00:05:32.869 20:16:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61211 ']' 00:05:32.869 20:16:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61211 00:05:32.869 20:16:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:32.869 20:16:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.869 
20:16:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61211 00:05:32.869 killing process with pid 61211 00:05:32.869 20:16:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:32.869 20:16:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:32.869 20:16:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61211' 00:05:32.869 20:16:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61211 00:05:32.869 20:16:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61211 00:05:33.808 20:16:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:33.808 20:16:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:33.808 20:16:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61193 ]] 00:05:33.808 20:16:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61193 00:05:33.808 20:16:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61193 ']' 00:05:33.808 Process with pid 61193 is not found 00:05:33.808 20:16:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61193 00:05:33.808 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61193) - No such process 00:05:33.808 20:16:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61193 is not found' 00:05:33.808 20:16:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61211 ]] 00:05:33.808 20:16:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61211 00:05:33.808 20:16:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61211 ']' 00:05:33.808 20:16:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61211 00:05:33.808 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61211) - No such process 00:05:33.808 Process with pid 61211 is not found 00:05:33.808 20:16:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61211 is not found' 00:05:33.808 20:16:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:33.808 ************************************ 00:05:33.808 END TEST cpu_locks 00:05:33.808 ************************************ 00:05:33.808 00:05:33.808 real 0m31.230s 00:05:33.808 user 0m54.041s 00:05:33.808 sys 0m4.639s 00:05:33.808 20:16:17 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.808 20:16:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.808 ************************************ 00:05:33.808 END TEST event 00:05:33.808 ************************************ 00:05:33.808 00:05:33.808 real 0m57.563s 00:05:33.808 user 1m46.093s 00:05:33.808 sys 0m7.369s 00:05:33.808 20:16:17 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.808 20:16:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.808 20:16:17 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:33.808 20:16:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.808 20:16:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.808 20:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:33.808 ************************************ 00:05:33.808 START TEST thread 00:05:33.808 ************************************ 00:05:33.808 20:16:17 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:33.808 * Looking for test storage... 
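The killprocess and cleanup traces above all follow one pattern: probe the pid with kill -0, refuse to touch anything named sudo, then kill and wait, tolerating "No such process" for pids that already exited. A condensed sketch of that pattern, mirroring the traced commands rather than reproducing the helper verbatim:

    # Condensed killprocess: stop an SPDK test process by pid, safely.
    pid=61193
    if kill -0 "$pid" 2> /dev/null; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        if [ "$process_name" = sudo ]; then
            echo "refusing to kill sudo (pid $pid)" >&2
        else
            echo "killing process with pid $pid"
            kill "$pid" && wait "$pid" 2> /dev/null
        fi
    else
        echo "Process with pid $pid is not found"
    fi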
00:05:33.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:33.808 20:16:17 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.808 20:16:17 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.808 20:16:17 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:34.070 20:16:18 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:34.070 20:16:18 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.070 20:16:18 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.070 20:16:18 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.070 20:16:18 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.070 20:16:18 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.070 20:16:18 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.070 20:16:18 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.070 20:16:18 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.070 20:16:18 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.070 20:16:18 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.070 20:16:18 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.070 20:16:18 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:34.070 20:16:18 thread -- scripts/common.sh@345 -- # : 1 00:05:34.070 20:16:18 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.070 20:16:18 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.070 20:16:18 thread -- scripts/common.sh@365 -- # decimal 1 00:05:34.070 20:16:18 thread -- scripts/common.sh@353 -- # local d=1 00:05:34.070 20:16:18 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.070 20:16:18 thread -- scripts/common.sh@355 -- # echo 1 00:05:34.070 20:16:18 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.070 20:16:18 thread -- scripts/common.sh@366 -- # decimal 2 00:05:34.070 20:16:18 thread -- scripts/common.sh@353 -- # local d=2 00:05:34.070 20:16:18 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.070 20:16:18 thread -- scripts/common.sh@355 -- # echo 2 00:05:34.070 20:16:18 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.070 20:16:18 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.070 20:16:18 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.070 20:16:18 thread -- scripts/common.sh@368 -- # return 0 00:05:34.070 20:16:18 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.070 20:16:18 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:34.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.070 --rc genhtml_branch_coverage=1 00:05:34.070 --rc genhtml_function_coverage=1 00:05:34.070 --rc genhtml_legend=1 00:05:34.070 --rc geninfo_all_blocks=1 00:05:34.070 --rc geninfo_unexecuted_blocks=1 00:05:34.070 00:05:34.070 ' 00:05:34.070 20:16:18 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:34.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.070 --rc genhtml_branch_coverage=1 00:05:34.070 --rc genhtml_function_coverage=1 00:05:34.070 --rc genhtml_legend=1 00:05:34.070 --rc geninfo_all_blocks=1 00:05:34.070 --rc geninfo_unexecuted_blocks=1 00:05:34.070 00:05:34.070 ' 00:05:34.070 20:16:18 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:34.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:34.070 --rc genhtml_branch_coverage=1 00:05:34.070 --rc genhtml_function_coverage=1 00:05:34.070 --rc genhtml_legend=1 00:05:34.070 --rc geninfo_all_blocks=1 00:05:34.070 --rc geninfo_unexecuted_blocks=1 00:05:34.070 00:05:34.070 ' 00:05:34.070 20:16:18 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:34.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.070 --rc genhtml_branch_coverage=1 00:05:34.070 --rc genhtml_function_coverage=1 00:05:34.070 --rc genhtml_legend=1 00:05:34.070 --rc geninfo_all_blocks=1 00:05:34.070 --rc geninfo_unexecuted_blocks=1 00:05:34.070 00:05:34.070 ' 00:05:34.070 20:16:18 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.070 20:16:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:34.070 20:16:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.070 20:16:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.070 ************************************ 00:05:34.070 START TEST thread_poller_perf 00:05:34.070 ************************************ 00:05:34.070 20:16:18 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.070 [2024-12-12 20:16:18.100623] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:05:34.070 [2024-12-12 20:16:18.100850] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61365 ] 00:05:34.070 [2024-12-12 20:16:18.261974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.331 [2024-12-12 20:16:18.382931] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.331 Running 1000 pollers for 1 seconds with 1 microseconds period. 
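An aside on the scripts/common.sh trace that precedes each test's lcov setup here (and repeats before later tests): cmp_versions splits two version strings on dots, dashes and colons and compares them field by field, so "lt 1.15 2" asks whether the installed lcov predates version 2. A simplified standalone version of that comparison (missing fields default to 0 here, a simplification of the traced logic):

    # Simplified cmp_versions: is version "$1" strictly less than "$2"?
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal is not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2: use the old-style coverage options"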
00:05:35.745 [2024-12-12T20:16:19.973Z] ====================================== 00:05:35.745 [2024-12-12T20:16:19.973Z] busy:2615255642 (cyc) 00:05:35.745 [2024-12-12T20:16:19.973Z] total_run_count: 306000 00:05:35.745 [2024-12-12T20:16:19.973Z] tsc_hz: 2600000000 (cyc) 00:05:35.745 [2024-12-12T20:16:19.973Z] ====================================== 00:05:35.745 [2024-12-12T20:16:19.973Z] poller_cost: 8546 (cyc), 3286 (nsec) 00:05:35.745 00:05:35.745 real 0m1.495s 00:05:35.745 user 0m1.302s 00:05:35.745 sys 0m0.083s 00:05:35.745 20:16:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.745 20:16:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.745 ************************************ 00:05:35.745 END TEST thread_poller_perf 00:05:35.745 ************************************ 00:05:35.745 20:16:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:35.745 20:16:19 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:35.745 20:16:19 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.745 20:16:19 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.745 ************************************ 00:05:35.745 START TEST thread_poller_perf 00:05:35.745 ************************************ 00:05:35.745 20:16:19 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:35.745 [2024-12-12 20:16:19.670613] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:05:35.745 [2024-12-12 20:16:19.670748] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61402 ] 00:05:35.745 [2024-12-12 20:16:19.832109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.745 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:35.745 [2024-12-12 20:16:19.954809] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.125 [2024-12-12T20:16:21.353Z] ====================================== 00:05:37.125 [2024-12-12T20:16:21.353Z] busy:2604327442 (cyc) 00:05:37.125 [2024-12-12T20:16:21.353Z] total_run_count: 3621000 00:05:37.125 [2024-12-12T20:16:21.353Z] tsc_hz: 2600000000 (cyc) 00:05:37.125 [2024-12-12T20:16:21.353Z] ====================================== 00:05:37.125 [2024-12-12T20:16:21.353Z] poller_cost: 719 (cyc), 276 (nsec) 00:05:37.125 00:05:37.125 real 0m1.490s 00:05:37.125 user 0m1.298s 00:05:37.125 sys 0m0.084s 00:05:37.125 20:16:21 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.125 ************************************ 00:05:37.125 END TEST thread_poller_perf 00:05:37.125 ************************************ 00:05:37.125 20:16:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.125 20:16:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:37.125 ************************************ 00:05:37.125 END TEST thread 00:05:37.125 ************************************ 00:05:37.125 00:05:37.125 real 0m3.251s 00:05:37.125 user 0m2.710s 00:05:37.125 sys 0m0.299s 00:05:37.125 20:16:21 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.125 20:16:21 thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.125 20:16:21 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:37.125 20:16:21 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:37.125 20:16:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.125 20:16:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.125 20:16:21 -- common/autotest_common.sh@10 -- # set +x 00:05:37.125 ************************************ 00:05:37.125 START TEST app_cmdline 00:05:37.125 ************************************ 00:05:37.125 20:16:21 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:37.125 * Looking for test storage... 
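The poller_cost figures printed by the two perf runs above follow directly from the other counters: cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure divides that by the TSC rate (2600000000 cyc/s here). Checking both runs' numbers with shell arithmetic:

    # poller_cost(cyc)  = busy / total_run_count
    # poller_cost(nsec) = poller_cost(cyc) * 1e9 / tsc_hz
    echo $(( 2615255642 / 306000 ))             # run 1 (-l 1): 8546 cyc
    echo $(( 2615255642 / 306000 * 10 / 26 ))   # 3286 nsec at 2.6 GHz
    echo $(( 2604327442 / 3621000 ))            # run 2 (-l 0): 719 cyc
    echo $(( 2604327442 / 3621000 * 10 / 26 ))  # 276 nsec at 2.6 GHz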
00:05:37.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:37.125 20:16:21 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.125 20:16:21 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:37.125 20:16:21 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:37.125 20:16:21 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.125 20:16:21 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:37.126 20:16:21 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:37.126 20:16:21 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.126 20:16:21 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:37.126 20:16:21 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.126 20:16:21 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.126 20:16:21 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.126 20:16:21 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:37.126 20:16:21 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.126 20:16:21 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.126 --rc genhtml_branch_coverage=1 00:05:37.126 --rc genhtml_function_coverage=1 00:05:37.126 --rc genhtml_legend=1 00:05:37.126 --rc geninfo_all_blocks=1 00:05:37.126 --rc geninfo_unexecuted_blocks=1 00:05:37.126 00:05:37.126 ' 00:05:37.126 20:16:21 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.126 --rc genhtml_branch_coverage=1 00:05:37.126 --rc genhtml_function_coverage=1 00:05:37.126 --rc genhtml_legend=1 00:05:37.126 --rc geninfo_all_blocks=1 00:05:37.126 --rc geninfo_unexecuted_blocks=1 00:05:37.126 
00:05:37.126 ' 00:05:37.126 20:16:21 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:37.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.126 --rc genhtml_branch_coverage=1 00:05:37.126 --rc genhtml_function_coverage=1 00:05:37.126 --rc genhtml_legend=1 00:05:37.126 --rc geninfo_all_blocks=1 00:05:37.126 --rc geninfo_unexecuted_blocks=1 00:05:37.126 00:05:37.126 ' 00:05:37.126 20:16:21 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.126 --rc genhtml_branch_coverage=1 00:05:37.126 --rc genhtml_function_coverage=1 00:05:37.126 --rc genhtml_legend=1 00:05:37.126 --rc geninfo_all_blocks=1 00:05:37.126 --rc geninfo_unexecuted_blocks=1 00:05:37.126 00:05:37.126 ' 00:05:37.126 20:16:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:37.126 20:16:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61491 00:05:37.126 20:16:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61491 00:05:37.126 20:16:21 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61491 ']' 00:05:37.126 20:16:21 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.126 20:16:21 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.126 20:16:21 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:37.126 20:16:21 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.126 20:16:21 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.126 20:16:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:37.385 [2024-12-12 20:16:21.391621] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
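Note the --rpcs-allowed spdk_get_version,rpc_get_methods flag on the spdk_tgt launch above: the cmdline test deliberately starts the target with only two RPCs exposed, so spdk_get_version succeeds below while env_dpdk_get_mem_stats comes back with -32601 "Method not found". A sketch of probing both against that target over the default socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" spdk_get_version          # allowlisted: returns the version JSON
    if ! "$rpc" env_dpdk_get_mem_stats; then
        # not on the allowlist, so the server answers -32601 Method not found
        echo "env_dpdk_get_mem_stats rejected, as the test expects"
    fi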
00:05:37.385 [2024-12-12 20:16:21.391894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61491 ] 00:05:37.385 [2024-12-12 20:16:21.550729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.645 [2024-12-12 20:16:21.664607] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.214 20:16:22 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.214 20:16:22 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:38.214 20:16:22 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:38.474 { 00:05:38.474 "version": "SPDK v25.01-pre git sha1 dc2db8405", 00:05:38.474 "fields": { 00:05:38.474 "major": 25, 00:05:38.474 "minor": 1, 00:05:38.474 "patch": 0, 00:05:38.474 "suffix": "-pre", 00:05:38.474 "commit": "dc2db8405" 00:05:38.474 } 00:05:38.474 } 00:05:38.474 20:16:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:38.474 20:16:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:38.474 20:16:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:38.474 20:16:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:38.474 20:16:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:38.474 20:16:22 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.474 20:16:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:38.474 20:16:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:38.474 20:16:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:38.474 20:16:22 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.474 20:16:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:38.474 20:16:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:38.474 20:16:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:38.474 20:16:22 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:38.474 20:16:22 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:38.474 20:16:22 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.474 20:16:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.474 20:16:22 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.474 20:16:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.474 20:16:22 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.474 20:16:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.474 20:16:22 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.474 20:16:22 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:38.474 20:16:22 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:38.734 request: 00:05:38.734 { 00:05:38.734 "method": "env_dpdk_get_mem_stats", 00:05:38.734 "req_id": 1 00:05:38.734 } 00:05:38.734 Got JSON-RPC error response 00:05:38.734 response: 00:05:38.734 { 00:05:38.734 "code": -32601, 00:05:38.734 "message": "Method not found" 00:05:38.734 } 00:05:38.734 20:16:22 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:38.734 20:16:22 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.734 20:16:22 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.734 20:16:22 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.734 20:16:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61491 00:05:38.734 20:16:22 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61491 ']' 00:05:38.734 20:16:22 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61491 00:05:38.734 20:16:22 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:38.734 20:16:22 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.734 20:16:22 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61491 00:05:38.734 killing process with pid 61491 00:05:38.734 20:16:22 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.734 20:16:22 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.734 20:16:22 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61491' 00:05:38.734 20:16:22 app_cmdline -- common/autotest_common.sh@973 -- # kill 61491 00:05:38.734 20:16:22 app_cmdline -- common/autotest_common.sh@978 -- # wait 61491 00:05:40.640 00:05:40.640 real 0m3.187s 00:05:40.640 user 0m3.354s 00:05:40.640 sys 0m0.516s 00:05:40.640 20:16:24 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.640 ************************************ 00:05:40.640 END TEST app_cmdline 00:05:40.640 ************************************ 00:05:40.640 20:16:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:40.640 20:16:24 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:40.640 20:16:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.640 20:16:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.640 20:16:24 -- common/autotest_common.sh@10 -- # set +x 00:05:40.640 ************************************ 00:05:40.640 START TEST version 00:05:40.640 ************************************ 00:05:40.640 20:16:24 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:40.640 * Looking for test storage... 
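The version test that follows derives 25.1rc0 by scraping include/spdk/version.h: each component comes from a grep for its #define, a cut on the tab separator, and tr to strip quotes, and the script then maps the -pre suffix to rc0 (visible where version=25.1rc0 is assigned in the trace). The extraction pipeline, condensed from the traced commands:

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h

    get_header_version() {
        # e.g. '#define SPDK_VERSION_MAJOR 25', or a quoted string for SUFFIX
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }

    major=$(get_header_version MAJOR)    # 25
    minor=$(get_header_version MINOR)    # 1
    patch=$(get_header_version PATCH)    # 0
    suffix=$(get_header_version SUFFIX)  # -pre
    echo "$major.$minor$suffix"          # 25.1-pre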
00:05:40.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:40.640 20:16:24 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.640 20:16:24 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.640 20:16:24 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.640 20:16:24 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.640 20:16:24 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.640 20:16:24 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.640 20:16:24 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.640 20:16:24 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.640 20:16:24 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.640 20:16:24 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.640 20:16:24 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.640 20:16:24 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.640 20:16:24 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.640 20:16:24 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.640 20:16:24 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.640 20:16:24 version -- scripts/common.sh@344 -- # case "$op" in 00:05:40.640 20:16:24 version -- scripts/common.sh@345 -- # : 1 00:05:40.640 20:16:24 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.640 20:16:24 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.640 20:16:24 version -- scripts/common.sh@365 -- # decimal 1 00:05:40.640 20:16:24 version -- scripts/common.sh@353 -- # local d=1 00:05:40.640 20:16:24 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.640 20:16:24 version -- scripts/common.sh@355 -- # echo 1 00:05:40.640 20:16:24 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.640 20:16:24 version -- scripts/common.sh@366 -- # decimal 2 00:05:40.640 20:16:24 version -- scripts/common.sh@353 -- # local d=2 00:05:40.640 20:16:24 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.640 20:16:24 version -- scripts/common.sh@355 -- # echo 2 00:05:40.640 20:16:24 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.640 20:16:24 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.640 20:16:24 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.640 20:16:24 version -- scripts/common.sh@368 -- # return 0 00:05:40.640 20:16:24 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.640 20:16:24 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.640 --rc genhtml_branch_coverage=1 00:05:40.640 --rc genhtml_function_coverage=1 00:05:40.640 --rc genhtml_legend=1 00:05:40.640 --rc geninfo_all_blocks=1 00:05:40.640 --rc geninfo_unexecuted_blocks=1 00:05:40.640 00:05:40.640 ' 00:05:40.640 20:16:24 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.640 --rc genhtml_branch_coverage=1 00:05:40.640 --rc genhtml_function_coverage=1 00:05:40.640 --rc genhtml_legend=1 00:05:40.640 --rc geninfo_all_blocks=1 00:05:40.640 --rc geninfo_unexecuted_blocks=1 00:05:40.640 00:05:40.640 ' 00:05:40.640 20:16:24 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.640 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:40.640 --rc genhtml_branch_coverage=1 00:05:40.640 --rc genhtml_function_coverage=1 00:05:40.640 --rc genhtml_legend=1 00:05:40.640 --rc geninfo_all_blocks=1 00:05:40.640 --rc geninfo_unexecuted_blocks=1 00:05:40.640 00:05:40.640 ' 00:05:40.640 20:16:24 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.640 --rc genhtml_branch_coverage=1 00:05:40.640 --rc genhtml_function_coverage=1 00:05:40.640 --rc genhtml_legend=1 00:05:40.640 --rc geninfo_all_blocks=1 00:05:40.640 --rc geninfo_unexecuted_blocks=1 00:05:40.640 00:05:40.640 ' 00:05:40.640 20:16:24 version -- app/version.sh@17 -- # get_header_version major 00:05:40.640 20:16:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:40.640 20:16:24 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.640 20:16:24 version -- app/version.sh@14 -- # cut -f2 00:05:40.640 20:16:24 version -- app/version.sh@17 -- # major=25 00:05:40.640 20:16:24 version -- app/version.sh@18 -- # get_header_version minor 00:05:40.640 20:16:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:40.640 20:16:24 version -- app/version.sh@14 -- # cut -f2 00:05:40.640 20:16:24 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.640 20:16:24 version -- app/version.sh@18 -- # minor=1 00:05:40.640 20:16:24 version -- app/version.sh@19 -- # get_header_version patch 00:05:40.640 20:16:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:40.640 20:16:24 version -- app/version.sh@14 -- # cut -f2 00:05:40.640 20:16:24 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.640 20:16:24 version -- app/version.sh@19 -- # patch=0 00:05:40.640 20:16:24 version -- app/version.sh@20 -- # get_header_version suffix 00:05:40.640 20:16:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:40.640 20:16:24 version -- app/version.sh@14 -- # cut -f2 00:05:40.640 20:16:24 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.640 20:16:24 version -- app/version.sh@20 -- # suffix=-pre 00:05:40.640 20:16:24 version -- app/version.sh@22 -- # version=25.1 00:05:40.640 20:16:24 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:40.640 20:16:24 version -- app/version.sh@28 -- # version=25.1rc0 00:05:40.640 20:16:24 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:40.640 20:16:24 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:40.641 20:16:24 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:40.641 20:16:24 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:40.641 ************************************ 00:05:40.641 END TEST version 00:05:40.641 ************************************ 00:05:40.641 00:05:40.641 real 0m0.191s 00:05:40.641 user 0m0.124s 00:05:40.641 sys 0m0.093s 00:05:40.641 20:16:24 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.641 20:16:24 version -- common/autotest_common.sh@10 -- # set +x 00:05:40.641 20:16:24 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:40.641 20:16:24 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:40.641 20:16:24 -- spdk/autotest.sh@194 -- # uname -s 00:05:40.641 20:16:24 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:40.641 20:16:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:40.641 20:16:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:40.641 20:16:24 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:40.641 20:16:24 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:40.641 20:16:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:40.641 20:16:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.641 20:16:24 -- common/autotest_common.sh@10 -- # set +x 00:05:40.641 ************************************ 00:05:40.641 START TEST blockdev_nvme 00:05:40.641 ************************************ 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:40.641 * Looking for test storage... 00:05:40.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.641 20:16:24 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.641 --rc genhtml_branch_coverage=1 00:05:40.641 --rc genhtml_function_coverage=1 00:05:40.641 --rc genhtml_legend=1 00:05:40.641 --rc geninfo_all_blocks=1 00:05:40.641 --rc geninfo_unexecuted_blocks=1 00:05:40.641 00:05:40.641 ' 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.641 --rc genhtml_branch_coverage=1 00:05:40.641 --rc genhtml_function_coverage=1 00:05:40.641 --rc genhtml_legend=1 00:05:40.641 --rc geninfo_all_blocks=1 00:05:40.641 --rc geninfo_unexecuted_blocks=1 00:05:40.641 00:05:40.641 ' 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.641 --rc genhtml_branch_coverage=1 00:05:40.641 --rc genhtml_function_coverage=1 00:05:40.641 --rc genhtml_legend=1 00:05:40.641 --rc geninfo_all_blocks=1 00:05:40.641 --rc geninfo_unexecuted_blocks=1 00:05:40.641 00:05:40.641 ' 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.641 --rc genhtml_branch_coverage=1 00:05:40.641 --rc genhtml_function_coverage=1 00:05:40.641 --rc genhtml_legend=1 00:05:40.641 --rc geninfo_all_blocks=1 00:05:40.641 --rc geninfo_unexecuted_blocks=1 00:05:40.641 00:05:40.641 ' 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:40.641 20:16:24 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61663 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:40.641 20:16:24 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61663 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61663 ']' 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.641 20:16:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:40.911 [2024-12-12 20:16:24.887159] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
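The blockdev_nvme setup about to run pulls its bdev config from scripts/gen_nvme.sh, which emits one bdev_nvme_attach_controller entry per local PCIe controller; here that is Nvme0 through Nvme3 at 0000:00:10.0 through 0000:00:13.0, loaded via load_subsystem_config. The generated JSON, reformatted for readability from the single-line string in the trace below (the /tmp path is only an example destination):

    cat << 'EOF' > /tmp/bdev_nvme.json
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
      ]
    }
    EOF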
00:05:40.911 [2024-12-12 20:16:24.887406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61663 ] 00:05:40.911 [2024-12-12 20:16:25.045030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.169 [2024-12-12 20:16:25.141907] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.738 20:16:25 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.738 20:16:25 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:41.738 20:16:25 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:05:41.738 20:16:25 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:05:41.738 20:16:25 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:41.738 20:16:25 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:41.738 20:16:25 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:41.738 20:16:25 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:41.738 20:16:25 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.738 20:16:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:41.996 20:16:26 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.996 20:16:26 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:05:41.996 20:16:26 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.996 20:16:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.997 20:16:26 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:05:41.997 20:16:26 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.997 20:16:26 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.997 20:16:26 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.997 20:16:26 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:05:41.997 20:16:26 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:41.997 20:16:26 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.997 20:16:26 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:05:41.997 20:16:26 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:05:41.997 20:16:26 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "74ed38d7-3b4f-4609-a536-157e3a8688eb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "74ed38d7-3b4f-4609-a536-157e3a8688eb",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "ce281c9a-a7db-4aa6-a2ca-24e6ad6c3a61"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ce281c9a-a7db-4aa6-a2ca-24e6ad6c3a61",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "cc9524e1-fd19-49cc-9396-af4470697b5d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cc9524e1-fd19-49cc-9396-af4470697b5d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "179c4231-2000-47f0-b704-c744026f4ac6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "179c4231-2000-47f0-b704-c744026f4ac6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "94cbcc06-d95b-478f-9227-85875a88b264"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "94cbcc06-d95b-478f-9227-85875a88b264",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "cd3b760a-7472-4ad0-a166-537c645bce49"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cd3b760a-7472-4ad0-a166-537c645bce49",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:41.997 20:16:26 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:05:41.997 20:16:26 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:05:41.997 20:16:26 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:05:41.997 20:16:26 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61663 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61663 ']' 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61663 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:41.997 20:16:26 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.997 20:16:26 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61663 00:05:42.255 killing process with pid 61663 00:05:42.255 20:16:26 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.255 20:16:26 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.255 20:16:26 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61663' 00:05:42.255 20:16:26 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61663 00:05:42.255 20:16:26 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61663 00:05:43.624 20:16:27 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:43.624 20:16:27 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:43.624 20:16:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:43.624 20:16:27 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.624 20:16:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:43.624 ************************************ 00:05:43.624 START TEST bdev_hello_world 00:05:43.624 ************************************ 00:05:43.624 20:16:27 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:43.624 [2024-12-12 20:16:27.815553] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:05:43.624 [2024-12-12 20:16:27.815666] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61747 ] 00:05:43.882 [2024-12-12 20:16:27.976436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.882 [2024-12-12 20:16:28.074450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.447 [2024-12-12 20:16:28.612879] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:44.447 [2024-12-12 20:16:28.613058] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:44.447 [2024-12-12 20:16:28.613084] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:44.447 [2024-12-12 20:16:28.615593] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:44.447 [2024-12-12 20:16:28.615988] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:44.447 [2024-12-12 20:16:28.616014] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:44.447 [2024-12-12 20:16:28.616155] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
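The hello_bdev run above opened Nvme0n1, wrote a buffer, and read back "Hello World!". A minimal sketch of repeating that run by hand, with the command and paths taken verbatim from the xtrace above (root privileges and configured hugepages are assumed, as in this CI environment):
  # Sketch only: re-run the hello world example against the first NVMe bdev.
  cd /home/vagrant/spdk_repo/spdk
  build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1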
00:05:44.447 00:05:44.447 [2024-12-12 20:16:28.616173] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:45.385 ************************************ 00:05:45.385 END TEST bdev_hello_world 00:05:45.385 ************************************ 00:05:45.385 00:05:45.385 real 0m1.573s 00:05:45.385 user 0m1.288s 00:05:45.385 sys 0m0.178s 00:05:45.385 20:16:29 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.385 20:16:29 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:45.385 20:16:29 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:05:45.385 20:16:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.385 20:16:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.385 20:16:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:45.385 ************************************ 00:05:45.385 START TEST bdev_bounds 00:05:45.385 ************************************ 00:05:45.385 Process bdevio pid: 61783 00:05:45.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.385 20:16:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:45.385 20:16:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61783 00:05:45.385 20:16:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.385 20:16:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61783' 00:05:45.385 20:16:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61783 00:05:45.385 20:16:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61783 ']' 00:05:45.385 20:16:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.385 20:16:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:45.385 20:16:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.385 20:16:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.385 20:16:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.385 20:16:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:45.385 [2024-12-12 20:16:29.426140] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
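The bdevio app coming up here is driven in two steps, both visible in the surrounding xtrace; a hedged sketch of that invocation (the trailing '' is an empty extra-arguments slot, and -s 0 mirrors PRE_RESERVED_MEM=0 from the setup above):
  # Sketch only: start bdevio in wait mode, then trigger the CUnit suites.
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &   # -w: wait for the RPC trigger
  test/bdev/bdevio/tests.py perform_tests                           # fires the per-bdev suites shown below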
00:05:45.385 [2024-12-12 20:16:29.426261] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61783 ] 00:05:45.385 [2024-12-12 20:16:29.581786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.643 [2024-12-12 20:16:29.683557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.643 [2024-12-12 20:16:29.683823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.643 [2024-12-12 20:16:29.683923] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.208 20:16:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.208 20:16:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:46.208 20:16:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:46.209 I/O targets: 00:05:46.209 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:46.209 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:46.209 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:46.209 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:46.209 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:46.209 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:46.209 00:05:46.209 00:05:46.209 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.209 http://cunit.sourceforge.net/ 00:05:46.209 00:05:46.209 00:05:46.209 Suite: bdevio tests on: Nvme3n1 00:05:46.209 Test: blockdev write read block ...passed 00:05:46.209 Test: blockdev write zeroes read block ...passed 00:05:46.209 Test: blockdev write zeroes read no split ...passed 00:05:46.209 Test: blockdev write zeroes read split ...passed 00:05:46.209 Test: blockdev write zeroes read split partial ...passed 00:05:46.209 Test: blockdev reset ...[2024-12-12 20:16:30.399668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:46.209 passed 00:05:46.209 Test: blockdev write read 8 blocks ...[2024-12-12 20:16:30.402616] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:05:46.209 passed 00:05:46.209 Test: blockdev write read size > 128k ...passed 00:05:46.209 Test: blockdev write read invalid size ...passed 00:05:46.209 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:46.209 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:46.209 Test: blockdev write read max offset ...passed 00:05:46.209 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:46.209 Test: blockdev writev readv 8 blocks ...passed 00:05:46.209 Test: blockdev writev readv 30 x 1block ...passed 00:05:46.209 Test: blockdev writev readv block ...passed 00:05:46.209 Test: blockdev writev readv size > 128k ...passed 00:05:46.209 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:46.209 Test: blockdev comparev and writev ...[2024-12-12 20:16:30.411190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c100a000 len:0x1000 00:05:46.209 [2024-12-12 20:16:30.411332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:46.209 passed 00:05:46.209 Test: blockdev nvme passthru rw ...passed 00:05:46.209 Test: blockdev nvme passthru vendor specific ...[2024-12-12 20:16:30.411845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:46.209 [2024-12-12 20:16:30.411875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:46.209 passed 00:05:46.209 Test: blockdev nvme admin passthru ...passed 00:05:46.209 Test: blockdev copy ...passed 
00:05:46.209 Suite: bdevio tests on: Nvme2n3 00:05:46.209 Test: blockdev write read block ...passed 00:05:46.209 Test: blockdev write zeroes read block ...passed 00:05:46.209 Test: blockdev write zeroes read no split ...passed 00:05:46.467 Test: blockdev write zeroes read split ...passed 00:05:46.467 Test: blockdev write zeroes read split partial ...passed 00:05:46.467 Test: blockdev reset ...[2024-12-12 20:16:30.468632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:46.467 [2024-12-12 20:16:30.471750] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:05:46.467 passed 00:05:46.467 Test: blockdev write read 8 blocks ...passed 00:05:46.467 Test: blockdev write read size > 128k ...passed 00:05:46.467 Test: blockdev write read invalid size ...passed 00:05:46.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:46.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:46.467 Test: blockdev write read max offset ...passed 00:05:46.467 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:46.467 Test: blockdev writev readv 8 blocks ...passed 00:05:46.467 Test: blockdev writev readv 30 x 1block ...passed 00:05:46.467 Test: blockdev writev readv block ...passed 00:05:46.467 Test: blockdev writev readv size > 128k ...passed 00:05:46.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:46.467 Test: blockdev comparev and writev ...[2024-12-12 20:16:30.477429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5006000 len:0x1000 00:05:46.467 [2024-12-12 20:16:30.477576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:46.467 passed 00:05:46.467 Test: blockdev nvme passthru rw ...passed 00:05:46.467 Test: blockdev nvme passthru vendor specific ...[2024-12-12 20:16:30.478073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:46.467 [2024-12-12 20:16:30.478100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:46.467 passed 00:05:46.467 Test: blockdev nvme admin passthru ...passed 00:05:46.467 Test: blockdev copy ...passed 
00:05:46.467 Suite: bdevio tests on: Nvme2n2 00:05:46.467 Test: blockdev write read block ...passed 00:05:46.467 Test: blockdev write zeroes read block ...passed 00:05:46.467 Test: blockdev write zeroes read no split ...passed 00:05:46.467 Test: blockdev write zeroes read split ...passed 00:05:46.467 Test: blockdev write zeroes read split partial ...passed 00:05:46.467 Test: blockdev reset ...[2024-12-12 20:16:30.520671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:46.467 [2024-12-12 20:16:30.523712] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:05:46.467 passed 00:05:46.467 Test: blockdev write read 8 blocks ...passed 00:05:46.467 Test: blockdev write read size > 128k ...passed 00:05:46.467 Test: blockdev write read invalid size ...passed 00:05:46.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:46.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:46.467 Test: blockdev write read max offset ...passed 00:05:46.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:46.468 Test: blockdev writev readv 8 blocks ...passed 00:05:46.468 Test: blockdev writev readv 30 x 1block ...passed 00:05:46.468 Test: blockdev writev readv block ...passed 00:05:46.468 Test: blockdev writev readv size > 128k ...passed 00:05:46.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:46.468 Test: blockdev comparev and writev ...[2024-12-12 20:16:30.529740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2da23c000 len:0x1000 00:05:46.468 [2024-12-12 20:16:30.529874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:46.468 passed 00:05:46.468 Test: blockdev nvme passthru rw ...passed 00:05:46.468 Test: blockdev nvme passthru vendor specific ...[2024-12-12 20:16:30.530589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:46.468 [2024-12-12 20:16:30.530715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:46.468 passed 00:05:46.468 Test: blockdev nvme admin passthru ...passed 00:05:46.468 Test: blockdev copy ...passed 
00:05:46.468 Suite: bdevio tests on: Nvme2n1 00:05:46.468 Test: blockdev write read block ...passed 00:05:46.468 Test: blockdev write zeroes read block ...passed 00:05:46.468 Test: blockdev write zeroes read no split ...passed 00:05:46.468 Test: blockdev write zeroes read split ...passed 00:05:46.468 Test: blockdev write zeroes read split partial ...passed 00:05:46.468 Test: blockdev reset ...[2024-12-12 20:16:30.571163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:46.468 [2024-12-12 20:16:30.573989] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:05:46.468 passed 00:05:46.468 Test: blockdev write read 8 blocks ...passed 00:05:46.468 Test: blockdev write read size > 128k ...passed 00:05:46.468 Test: blockdev write read invalid size ...passed 00:05:46.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:46.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:46.468 Test: blockdev write read max offset ...passed 00:05:46.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:46.468 Test: blockdev writev readv 8 blocks ...passed 00:05:46.468 Test: blockdev writev readv 30 x 1block ...passed 00:05:46.468 Test: blockdev writev readv block ...passed 00:05:46.468 Test: blockdev writev readv size > 128k ...passed 00:05:46.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:46.468 Test: blockdev comparev and writev ...[2024-12-12 20:16:30.580471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2da238000 len:0x1000 00:05:46.468 [2024-12-12 20:16:30.580619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:46.468 passed 00:05:46.468 Test: blockdev nvme passthru rw ...passed 00:05:46.468 Test: blockdev nvme passthru vendor specific ...[2024-12-12 20:16:30.581315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:46.468 [2024-12-12 20:16:30.581437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:46.468 passed 00:05:46.468 Test: blockdev nvme admin passthru ...passed 00:05:46.468 Test: blockdev copy ...passed 
00:05:46.468 Suite: bdevio tests on: Nvme1n1 00:05:46.468 Test: blockdev write read block ...passed 00:05:46.468 Test: blockdev write zeroes read block ...passed 00:05:46.468 Test: blockdev write zeroes read no split ...passed 00:05:46.468 Test: blockdev write zeroes read split ...passed 00:05:46.468 Test: blockdev write zeroes read split partial ...passed 00:05:46.468 Test: blockdev reset ...[2024-12-12 20:16:30.635982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:46.468 [2024-12-12 20:16:30.638611] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:05:46.468 passed 00:05:46.468 Test: blockdev write read 8 blocks ...passed 00:05:46.468 Test: blockdev write read size > 128k ...passed 00:05:46.468 Test: blockdev write read invalid size ...passed 00:05:46.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:46.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:46.468 Test: blockdev write read max offset ...passed 00:05:46.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:46.468 Test: blockdev writev readv 8 blocks ...passed 00:05:46.468 Test: blockdev writev readv 30 x 1block ...passed 00:05:46.468 Test: blockdev writev readv block ...passed 00:05:46.468 Test: blockdev writev readv size > 128k ...passed 00:05:46.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:46.468 Test: blockdev comparev and writev ...[2024-12-12 20:16:30.644604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2da234000 len:0x1000 00:05:46.468 [2024-12-12 20:16:30.644646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:46.468 passed 00:05:46.468 Test: blockdev nvme passthru rw ...passed 00:05:46.468 Test: blockdev nvme passthru vendor specific ...passed 00:05:46.468 Test: blockdev nvme admin passthru ...[2024-12-12 20:16:30.645170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:46.468 [2024-12-12 20:16:30.645200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:46.468 passed 00:05:46.468 Test: blockdev copy ...passed 00:05:46.468 Suite: bdevio tests on: Nvme0n1 00:05:46.468 Test: blockdev write read block ...passed 00:05:46.468 Test: blockdev write zeroes read block ...passed 00:05:46.468 Test: blockdev write zeroes read no split ...passed 00:05:46.468 Test: blockdev write zeroes read split ...passed 00:05:46.468 Test: blockdev write zeroes read split partial ...passed 00:05:46.468 Test: blockdev reset ...[2024-12-12 20:16:30.687041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:46.468 [2024-12-12 20:16:30.690721] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:05:46.468 passed 00:05:46.468 Test: blockdev write read 8 blocks ...passed 00:05:46.468 Test: blockdev write read size > 128k ...passed 00:05:46.468 Test: blockdev write read invalid size ...passed 00:05:46.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:46.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:46.468 Test: blockdev write read max offset ...passed 00:05:46.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:46.468 Test: blockdev writev readv 8 blocks ...passed 00:05:46.728 Test: blockdev writev readv 30 x 1block ...passed 00:05:46.728 Test: blockdev writev readv block ...passed 00:05:46.728 Test: blockdev writev readv size > 128k ...passed 00:05:46.728 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:46.728 Test: blockdev comparev and writev ...passed 00:05:46.728 Test: blockdev nvme passthru rw ...[2024-12-12 20:16:30.697118] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:46.728 separate metadata which is not supported yet. 00:05:46.728 passed 00:05:46.728 Test: blockdev nvme passthru vendor specific ...[2024-12-12 20:16:30.697618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:46.728 [2024-12-12 20:16:30.697740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:46.728 passed 00:05:46.728 Test: blockdev nvme admin passthru ...passed 00:05:46.728 Test: blockdev copy ...passed 00:05:46.728 00:05:46.728 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.728 suites 6 6 n/a 0 0 00:05:46.728 tests 138 138 138 0 0 00:05:46.728 asserts 893 893 893 0 n/a 00:05:46.728 00:05:46.728 Elapsed time = 0.933 seconds 00:05:46.728 0 00:05:46.728 20:16:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61783 00:05:46.728 20:16:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61783 ']' 00:05:46.728 20:16:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61783 00:05:46.728 20:16:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:46.728 20:16:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.728 20:16:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61783 00:05:46.728 killing process with pid 61783 00:05:46.728 20:16:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.728 20:16:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.728 20:16:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61783' 00:05:46.729 20:16:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61783 00:05:46.729 20:16:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61783 00:05:47.369 20:16:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:47.369 00:05:47.369 real 0m2.048s 00:05:47.369 user 0m5.220s 00:05:47.369 sys 0m0.290s 00:05:47.369 20:16:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.369 ************************************ 00:05:47.369 END TEST bdev_bounds 00:05:47.369 ************************************ 00:05:47.369 20:16:31 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:47.369 20:16:31 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:47.369 20:16:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:47.369 20:16:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.369 20:16:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:47.369 ************************************ 00:05:47.369 START TEST bdev_nbd 00:05:47.369 ************************************ 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61843 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61843 /var/tmp/spdk-nbd.sock 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61843 ']' 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
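The bdev_nbd stage that begins here maps each bdev onto a kernel /dev/nbdX node and round-trips one 4 KiB block through it with dd. A minimal sketch of one iteration, using the socket and paths from this log (the explicit /dev/nbd0 argument is illustrative; the helper also lets the RPC pick a free device, as the xtrace below shows):
  # Sketch only: expose bdevs over a private RPC socket, map one, and test it.
  test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json '' &
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # expect "1+0 records in/out"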
00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:47.369 20:16:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:47.369 [2024-12-12 20:16:31.513852] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:05:47.369 [2024-12-12 20:16:31.513966] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:47.628 [2024-12-12 20:16:31.670015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.628 [2024-12-12 20:16:31.767567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:48.195 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:48.453 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:48.453 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:48.453 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:48.453 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:48.453 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # 
local i 00:05:48.453 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.453 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.453 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:48.453 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:48.453 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.453 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.453 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:48.453 1+0 records in 00:05:48.453 1+0 records out 00:05:48.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303164 s, 13.5 MB/s 00:05:48.454 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:48.454 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:48.454 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.715 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:48.976 1+0 records in 00:05:48.976 1+0 records out 00:05:48.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388235 s, 10.6 MB/s 00:05:48.976 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:48.976 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:48.976 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:48.976 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.976 20:16:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:48.976 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:48.976 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:48.976 20:16:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:48.976 1+0 records in 00:05:48.976 1+0 records out 00:05:48.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381589 s, 10.7 MB/s 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.976 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:48.977 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:48.977 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:48.977 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.238 20:16:33 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:49.238 1+0 records in 00:05:49.238 1+0 records out 00:05:49.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420833 s, 9.7 MB/s 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:49.238 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:49.500 1+0 records in 00:05:49.500 1+0 records out 00:05:49.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466654 s, 8.8 MB/s 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:49.500 20:16:33 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:49.500 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:49.760 1+0 records in 00:05:49.760 1+0 records out 00:05:49.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416126 s, 9.8 MB/s 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:49.760 20:16:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.021 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:50.021 { 00:05:50.021 "nbd_device": "/dev/nbd0", 00:05:50.021 "bdev_name": "Nvme0n1" 00:05:50.021 }, 00:05:50.021 { 00:05:50.021 "nbd_device": "/dev/nbd1", 00:05:50.021 "bdev_name": "Nvme1n1" 00:05:50.021 }, 00:05:50.021 { 00:05:50.021 "nbd_device": "/dev/nbd2", 00:05:50.021 "bdev_name": "Nvme2n1" 00:05:50.021 }, 00:05:50.021 { 00:05:50.021 "nbd_device": "/dev/nbd3", 00:05:50.021 "bdev_name": "Nvme2n2" 00:05:50.021 }, 00:05:50.021 { 00:05:50.021 "nbd_device": "/dev/nbd4", 00:05:50.021 "bdev_name": "Nvme2n3" 00:05:50.021 }, 00:05:50.021 { 00:05:50.021 "nbd_device": "/dev/nbd5", 00:05:50.021 "bdev_name": "Nvme3n1" 00:05:50.021 } 00:05:50.021 ]' 00:05:50.021 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:50.021 20:16:34 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # echo '[ 00:05:50.021 { 00:05:50.021 "nbd_device": "/dev/nbd0", 00:05:50.021 "bdev_name": "Nvme0n1" 00:05:50.021 }, 00:05:50.021 { 00:05:50.021 "nbd_device": "/dev/nbd1", 00:05:50.021 "bdev_name": "Nvme1n1" 00:05:50.021 }, 00:05:50.021 { 00:05:50.021 "nbd_device": "/dev/nbd2", 00:05:50.021 "bdev_name": "Nvme2n1" 00:05:50.021 }, 00:05:50.021 { 00:05:50.021 "nbd_device": "/dev/nbd3", 00:05:50.021 "bdev_name": "Nvme2n2" 00:05:50.021 }, 00:05:50.021 { 00:05:50.021 "nbd_device": "/dev/nbd4", 00:05:50.021 "bdev_name": "Nvme2n3" 00:05:50.021 }, 00:05:50.021 { 00:05:50.021 "nbd_device": "/dev/nbd5", 00:05:50.021 "bdev_name": "Nvme3n1" 00:05:50.021 } 00:05:50.021 ]' 00:05:50.021 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:50.021 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:50.021 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.021 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:50.021 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.021 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:50.021 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.021 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.281 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.281 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.281 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.281 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.281 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.281 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.281 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:50.281 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.281 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.281 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.541 20:16:34 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.541 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:50.802 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:50.802 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:50.802 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:50.802 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.802 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.802 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:50.802 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:50.802 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.802 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.802 20:16:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:51.062 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:51.062 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:51.062 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:51.062 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.062 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.062 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:51.062 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:51.062 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.062 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.062 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:51.323 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:51.323 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:51.323 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:51.323 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.323 20:16:35 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.323 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:51.323 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:51.323 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.323 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.323 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.323 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:51.584 
20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:51.584 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:51.844 /dev/nbd0 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:51.844 1+0 records in 00:05:51.844 1+0 records out 00:05:51.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050753 s, 8.1 MB/s 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:51.844 20:16:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:52.105 /dev/nbd1 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
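[Annotation, not part of the captured output] The trace above and below repeats the waitfornbd pattern from autotest_common.sh for each exported device: poll /proc/partitions until the kernel registers the NBD node, then prove it is readable with a single 4 KiB O_DIRECT read. A minimal standalone sketch of that pattern follows; the retry delay and the temp-file path are illustrative assumptions, not values taken from this log.

```bash
#!/usr/bin/env bash
# Sketch of the waitfornbd readiness check traced in this log.
# Assumes the device was just exported via `rpc.py nbd_start_disk`.
# The 0.1 s delay and the /tmp path are illustrative assumptions.
waitfornbd_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # Wait for the kernel to list the device (whole-word match).
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1

    # Prove the device answers I/O: one 4 KiB read, bypassing the page cache.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]   # mirrors the '[' 4096 '!=' 0 ']' check in the trace
}

waitfornbd_sketch nbd0
```

Later in this trace (nbd_common.sh@76 through @83) the same devices are also exercised end to end: 1 MiB of /dev/urandom is written to each one with oflag=direct and read back with cmp -b -n 1M against the source file.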
00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:52.105 1+0 records in 00:05:52.105 1+0 records out 00:05:52.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409071 s, 10.0 MB/s 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:52.105 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:52.105 /dev/nbd10 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:52.367 1+0 records in 00:05:52.367 1+0 records out 00:05:52.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034347 s, 11.9 MB/s 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:52.367 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:52.367 /dev/nbd11 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd11 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:52.628 1+0 records in 00:05:52.628 1+0 records out 00:05:52.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003358 s, 12.2 MB/s 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:52.628 /dev/nbd12 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:52.628 1+0 records in 00:05:52.628 1+0 records out 00:05:52.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400007 s, 10.2 MB/s 00:05:52.628 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.890 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:52.890 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.890 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.890 20:16:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:52.890 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.890 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:52.890 20:16:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:52.890 /dev/nbd13 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:52.890 1+0 records in 00:05:52.890 1+0 records out 00:05:52.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594101 s, 6.9 MB/s 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.890 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.151 { 00:05:53.151 "nbd_device": "/dev/nbd0", 00:05:53.151 "bdev_name": "Nvme0n1" 00:05:53.151 }, 00:05:53.151 { 00:05:53.151 "nbd_device": "/dev/nbd1", 00:05:53.151 "bdev_name": "Nvme1n1" 00:05:53.151 
}, 00:05:53.151 { 00:05:53.151 "nbd_device": "/dev/nbd10", 00:05:53.151 "bdev_name": "Nvme2n1" 00:05:53.151 }, 00:05:53.151 { 00:05:53.151 "nbd_device": "/dev/nbd11", 00:05:53.151 "bdev_name": "Nvme2n2" 00:05:53.151 }, 00:05:53.151 { 00:05:53.151 "nbd_device": "/dev/nbd12", 00:05:53.151 "bdev_name": "Nvme2n3" 00:05:53.151 }, 00:05:53.151 { 00:05:53.151 "nbd_device": "/dev/nbd13", 00:05:53.151 "bdev_name": "Nvme3n1" 00:05:53.151 } 00:05:53.151 ]' 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.151 { 00:05:53.151 "nbd_device": "/dev/nbd0", 00:05:53.151 "bdev_name": "Nvme0n1" 00:05:53.151 }, 00:05:53.151 { 00:05:53.151 "nbd_device": "/dev/nbd1", 00:05:53.151 "bdev_name": "Nvme1n1" 00:05:53.151 }, 00:05:53.151 { 00:05:53.151 "nbd_device": "/dev/nbd10", 00:05:53.151 "bdev_name": "Nvme2n1" 00:05:53.151 }, 00:05:53.151 { 00:05:53.151 "nbd_device": "/dev/nbd11", 00:05:53.151 "bdev_name": "Nvme2n2" 00:05:53.151 }, 00:05:53.151 { 00:05:53.151 "nbd_device": "/dev/nbd12", 00:05:53.151 "bdev_name": "Nvme2n3" 00:05:53.151 }, 00:05:53.151 { 00:05:53.151 "nbd_device": "/dev/nbd13", 00:05:53.151 "bdev_name": "Nvme3n1" 00:05:53.151 } 00:05:53.151 ]' 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.151 /dev/nbd1 00:05:53.151 /dev/nbd10 00:05:53.151 /dev/nbd11 00:05:53.151 /dev/nbd12 00:05:53.151 /dev/nbd13' 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.151 /dev/nbd1 00:05:53.151 /dev/nbd10 00:05:53.151 /dev/nbd11 00:05:53.151 /dev/nbd12 00:05:53.151 /dev/nbd13' 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:53.151 256+0 records in 00:05:53.151 256+0 records out 00:05:53.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00578913 s, 181 MB/s 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.151 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.412 256+0 records in 00:05:53.412 256+0 records out 
00:05:53.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0613265 s, 17.1 MB/s 00:05:53.412 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.412 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.412 256+0 records in 00:05:53.412 256+0 records out 00:05:53.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0637911 s, 16.4 MB/s 00:05:53.412 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.412 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:05:53.412 256+0 records in 00:05:53.412 256+0 records out 00:05:53.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0781154 s, 13.4 MB/s 00:05:53.412 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.412 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:05:53.412 256+0 records in 00:05:53.412 256+0 records out 00:05:53.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.064431 s, 16.3 MB/s 00:05:53.412 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.412 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:05:53.672 256+0 records in 00:05:53.672 256+0 records out 00:05:53.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0640353 s, 16.4 MB/s 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:05:53.672 256+0 records in 00:05:53.672 256+0 records out 00:05:53.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.063301 s, 16.6 MB/s 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.672 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:05:53.673 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.673 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:05:53.673 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.673 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:05:53.673 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.673 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:05:53.673 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:53.673 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:53.673 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.673 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:53.673 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.673 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:53.673 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.673 20:16:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.937 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.937 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.937 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.937 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.937 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.937 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.937 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:53.937 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.937 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.937 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.199 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:05:54.460 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:05:54.460 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:05:54.460 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:05:54.460 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.460 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.460 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:05:54.460 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:54.460 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.460 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.460 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:05:54.722 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:05:54.722 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:05:54.722 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:05:54.722 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.722 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.722 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:05:54.722 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:54.722 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.722 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.722 20:16:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:05:54.983 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd13 00:05:54.983 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:05:54.983 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:05:54.983 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.983 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.983 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:05:54.983 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:54.983 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.983 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.983 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.983 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:05:55.244 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:05:55.505 malloc_lvol_verify 00:05:55.505 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:05:55.505 1058c653-8c43-41b7-8fb8-0ddcd64e0963 00:05:55.505 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:05:55.765 550ebb50-a39b-45c7-9c60-918b7731f968 00:05:55.765 20:16:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:05:56.026 /dev/nbd0 00:05:56.026 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:05:56.026 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 
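[Annotation, not part of the captured output] The nbd_with_lvol_verify sequence in progress here builds a small logical-volume stack over RPC and mounts it through NBD: a 16 MiB malloc bdev with 512-byte blocks, an lvolstore on top of it, a 4 MiB lvol inside that, and finally an NBD export plus mkfs.ext4 on the resulting /dev/nbd0. A condensed sketch of the same RPC sequence, assuming an SPDK target is already running on the socket shown in the log:

```bash
#!/usr/bin/env bash
# Condensed sketch of the nbd_with_lvol_verify flow traced here.
# Assumes an SPDK target is already listening on $SOCK.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock

# 16 MiB malloc bdev with 512-byte blocks to back the lvolstore
"$RPC" -s "$SOCK" bdev_malloc_create -b malloc_lvol_verify 16 512

# lvolstore on the malloc bdev, then a 4 MiB lvol inside it
"$RPC" -s "$SOCK" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$RPC" -s "$SOCK" bdev_lvol_create lvol 4 -l lvs

# Export the lvol as a kernel block device and put a filesystem on it
"$RPC" -s "$SOCK" nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0

# Tear the NBD export back down
"$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0
```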
00:05:56.026 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:05:56.026 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:05:56.026 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:05:56.026 mke2fs 1.47.0 (5-Feb-2023) 00:05:56.026 Discarding device blocks: 0/4096 done 00:05:56.026 Creating filesystem with 4096 1k blocks and 1024 inodes 00:05:56.026 00:05:56.026 Allocating group tables: 0/1 done 00:05:56.026 Writing inode tables: 0/1 done 00:05:56.026 Creating journal (1024 blocks): done 00:05:56.026 Writing superblocks and filesystem accounting information: 0/1 done 00:05:56.026 00:05:56.026 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:56.026 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.026 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:05:56.026 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.026 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:56.026 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.026 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61843 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61843 ']' 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61843 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61843 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.287 killing process with pid 61843 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61843' 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61843 00:05:56.287 20:16:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61843 00:05:56.859 20:16:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:05:56.859 00:05:56.859 real 0m9.583s 00:05:56.859 user 0m13.929s 
00:05:56.859 sys 0m3.030s 00:05:56.859 20:16:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.859 20:16:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:56.859 ************************************ 00:05:56.859 END TEST bdev_nbd 00:05:56.859 ************************************ 00:05:56.859 20:16:41 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:05:56.859 skipping fio tests on NVMe due to multi-ns failures. 00:05:56.859 20:16:41 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:05:56.859 20:16:41 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:05:56.859 20:16:41 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:56.859 20:16:41 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:56.859 20:16:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:05:56.859 20:16:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.859 20:16:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:56.859 ************************************ 00:05:56.859 START TEST bdev_verify 00:05:56.859 ************************************ 00:05:56.859 20:16:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:57.120 [2024-12-12 20:16:41.138279] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:05:57.120 [2024-12-12 20:16:41.138394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62212 ] 00:05:57.120 [2024-12-12 20:16:41.293777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.381 [2024-12-12 20:16:41.377744] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.381 [2024-12-12 20:16:41.377936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.952 Running I/O for 5 seconds... 
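[Annotation, not part of the captured output] bdev_verify, starting here, drives the six NVMe bdevs with SPDK's bdevperf example instead of the kernel NBD path. The invocation below is reassembled from the run_test line above; the per-flag notes reflect common bdevperf usage and are my reading, not something this log states, so confirm them against bdevperf --help for this build.

```bash
#!/usr/bin/env bash
# Reassembled bdevperf invocation for the bdev_verify run starting above.
# Flag comments are assumptions based on common bdevperf usage.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

args=(
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev config to load
    -q 128       # I/O queue depth
    -o 4096      # I/O size in bytes
    -w verify    # verify workload: write, read back, compare
    -t 5         # run for 5 seconds
    -C           # harness flag, kept verbatim from the log
    -m 0x3       # core mask: reactors on cores 0 and 1
)
"$BDEVPERF" "${args[@]}"
```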
00:05:59.831 23168.00 IOPS, 90.50 MiB/s [2024-12-12T20:16:45.436Z] 23552.00 IOPS, 92.00 MiB/s [2024-12-12T20:16:46.378Z] 24490.67 IOPS, 95.67 MiB/s [2024-12-12T20:16:47.323Z] 24896.00 IOPS, 97.25 MiB/s [2024-12-12T20:16:47.323Z] 24934.40 IOPS, 97.40 MiB/s 00:06:03.095 Latency(us) 00:06:03.095 [2024-12-12T20:16:47.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:03.095 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:03.095 Verification LBA range: start 0x0 length 0xbd0bd 00:06:03.095 Nvme0n1 : 5.05 2078.96 8.12 0.00 0.00 61425.59 11090.71 65737.65 00:06:03.095 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:03.095 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:03.095 Nvme0n1 : 5.04 2030.42 7.93 0.00 0.00 62853.10 12401.43 68157.44 00:06:03.095 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:03.095 Verification LBA range: start 0x0 length 0xa0000 00:06:03.095 Nvme1n1 : 5.05 2077.70 8.12 0.00 0.00 61373.71 12502.25 60091.47 00:06:03.095 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:03.095 Verification LBA range: start 0xa0000 length 0xa0000 00:06:03.095 Nvme1n1 : 5.04 2029.86 7.93 0.00 0.00 62748.82 16131.94 57671.68 00:06:03.095 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:03.095 Verification LBA range: start 0x0 length 0x80000 00:06:03.095 Nvme2n1 : 5.05 2077.13 8.11 0.00 0.00 61241.29 14014.62 54445.29 00:06:03.095 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:03.095 Verification LBA range: start 0x80000 length 0x80000 00:06:03.095 Nvme2n1 : 5.05 2029.30 7.93 0.00 0.00 62615.67 16031.11 56865.08 00:06:03.095 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:03.095 Verification LBA range: start 0x0 length 0x80000 00:06:03.095 Nvme2n2 : 5.05 2076.52 8.11 0.00 0.00 61128.95 14317.10 55655.19 00:06:03.095 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:03.095 Verification LBA range: start 0x80000 length 0x80000 00:06:03.095 Nvme2n2 : 5.06 2036.59 7.96 0.00 0.00 62252.19 3881.75 55251.89 00:06:03.095 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:03.095 Verification LBA range: start 0x0 length 0x80000 00:06:03.095 Nvme2n3 : 5.07 2084.14 8.14 0.00 0.00 60787.91 4335.46 57268.38 00:06:03.095 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:03.095 Verification LBA range: start 0x80000 length 0x80000 00:06:03.095 Nvme2n3 : 5.07 2044.56 7.99 0.00 0.00 61943.82 8922.98 56461.78 00:06:03.095 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:03.095 Verification LBA range: start 0x0 length 0x20000 00:06:03.095 Nvme3n1 : 5.08 2092.71 8.17 0.00 0.00 60497.94 7158.55 58881.58 00:06:03.095 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:03.095 Verification LBA range: start 0x20000 length 0x20000 00:06:03.095 Nvme3n1 : 5.07 2044.03 7.98 0.00 0.00 61866.22 7309.78 59688.17 00:06:03.095 [2024-12-12T20:16:47.323Z] =================================================================================================================== 00:06:03.095 [2024-12-12T20:16:47.323Z] Total : 24701.93 96.49 0.00 0.00 61718.81 3881.75 68157.44 00:06:04.484 00:06:04.484 real 0m7.381s 00:06:04.484 user 0m13.865s 00:06:04.484 sys 0m0.206s 00:06:04.484 20:16:48 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.484 ************************************ 00:06:04.484 END TEST bdev_verify 00:06:04.484 ************************************ 00:06:04.484 20:16:48 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:04.484 20:16:48 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:04.484 20:16:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:04.484 20:16:48 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.484 20:16:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.484 ************************************ 00:06:04.484 START TEST bdev_verify_big_io 00:06:04.484 ************************************ 00:06:04.484 20:16:48 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:04.484 [2024-12-12 20:16:48.565930] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:06:04.484 [2024-12-12 20:16:48.566171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62310 ] 00:06:04.746 [2024-12-12 20:16:48.726140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.746 [2024-12-12 20:16:48.847761] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.746 [2024-12-12 20:16:48.847863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.318 Running I/O for 5 seconds... 
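[Annotation, not part of the captured output] The IOPS and MiB/s columns in these progress lines and latency tables are consistent with throughput = IOPS x I/O size / 2^20, which makes for a quick sanity check of a run:

```bash
# bdev_verify runs with -o 4096 and bdev_verify_big_io with -o 65536
# (see the run_test lines above), so the progress figures check out:
awk 'BEGIN {
    printf "%.2f MiB/s\n", 23168 * 4096  / 1048576   # 90.50, first verify sample above
    printf "%.2f MiB/s\n", 1690  * 65536 / 1048576   # 105.62, first big_io sample below
}'
```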
00:06:11.440 1690.00 IOPS, 105.62 MiB/s [2024-12-12T20:16:55.927Z] 2499.50 IOPS, 156.22 MiB/s [2024-12-12T20:16:56.187Z] 2901.67 IOPS, 181.35 MiB/s 00:06:11.959 Latency(us) 00:06:11.959 [2024-12-12T20:16:56.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:11.959 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:11.959 Verification LBA range: start 0x0 length 0xbd0b 00:06:11.959 Nvme0n1 : 5.78 113.14 7.07 0.00 0.00 1062589.16 14216.27 1077613.49 00:06:11.959 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:11.959 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:11.959 Nvme0n1 : 5.93 86.41 5.40 0.00 0.00 1407666.61 14014.62 1742249.35 00:06:11.959 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:11.959 Verification LBA range: start 0x0 length 0xa000 00:06:11.959 Nvme1n1 : 5.78 115.51 7.22 0.00 0.00 1020442.91 101631.21 916294.10 00:06:11.959 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:11.959 Verification LBA range: start 0xa000 length 0xa000 00:06:11.959 Nvme1n1 : 5.93 86.38 5.40 0.00 0.00 1326661.32 129055.51 1406705.03 00:06:11.959 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:11.959 Verification LBA range: start 0x0 length 0x8000 00:06:11.959 Nvme2n1 : 5.93 125.73 7.86 0.00 0.00 929769.46 37708.41 942105.21 00:06:11.959 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:11.959 Verification LBA range: start 0x8000 length 0x8000 00:06:11.959 Nvme2n1 : 5.98 96.28 6.02 0.00 0.00 1140315.11 20769.87 1193763.45 00:06:11.959 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:11.959 Verification LBA range: start 0x0 length 0x8000 00:06:11.959 Nvme2n2 : 5.93 126.21 7.89 0.00 0.00 897182.79 37910.06 967916.31 00:06:11.959 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:11.959 Verification LBA range: start 0x8000 length 0x8000 00:06:11.959 Nvme2n2 : 6.07 115.89 7.24 0.00 0.00 904382.05 13006.38 1226027.32 00:06:11.959 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:11.959 Verification LBA range: start 0x0 length 0x8000 00:06:11.959 Nvme2n3 : 5.94 129.66 8.10 0.00 0.00 848608.18 47185.92 1019538.51 00:06:11.959 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:11.959 Verification LBA range: start 0x8000 length 0x8000 00:06:11.959 Nvme2n3 : 6.23 160.81 10.05 0.00 0.00 624321.51 8368.44 1245385.65 00:06:11.959 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:11.959 Verification LBA range: start 0x0 length 0x2000 00:06:11.959 Nvme3n1 : 5.94 139.99 8.75 0.00 0.00 763021.88 1701.42 1025991.29 00:06:11.959 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:11.959 Verification LBA range: start 0x2000 length 0x2000 00:06:11.959 Nvme3n1 : 6.49 315.68 19.73 0.00 0.00 304578.54 393.85 1290555.08 00:06:11.959 [2024-12-12T20:16:56.187Z] =================================================================================================================== 00:06:11.959 [2024-12-12T20:16:56.187Z] Total : 1611.69 100.73 0.00 0.00 815847.64 393.85 1742249.35 00:06:13.873 ************************************ 00:06:13.873 END TEST bdev_verify_big_io 00:06:13.873 ************************************ 00:06:13.873 00:06:13.873 real 0m9.337s 00:06:13.873 user 0m17.660s 00:06:13.873 sys 0m0.268s 00:06:13.873 
20:16:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.873 20:16:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:13.873 20:16:57 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:13.873 20:16:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:13.873 20:16:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.873 20:16:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.873 ************************************ 00:06:13.873 START TEST bdev_write_zeroes 00:06:13.873 ************************************ 00:06:13.873 20:16:57 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:13.873 [2024-12-12 20:16:57.929194] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:06:13.873 [2024-12-12 20:16:57.929281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62434 ] 00:06:13.873 [2024-12-12 20:16:58.081857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.134 [2024-12-12 20:16:58.188362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.763 Running I/O for 1 seconds... 00:06:15.706 68736.00 IOPS, 268.50 MiB/s 00:06:15.706 Latency(us) 00:06:15.706 [2024-12-12T20:16:59.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:15.706 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.706 Nvme0n1 : 1.03 11329.75 44.26 0.00 0.00 11274.67 9023.80 27424.30 00:06:15.706 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.706 Nvme1n1 : 1.03 11317.06 44.21 0.00 0.00 11273.92 9124.63 26819.35 00:06:15.706 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.706 Nvme2n1 : 1.03 11304.33 44.16 0.00 0.00 11243.13 9175.04 25811.10 00:06:15.706 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.706 Nvme2n2 : 1.03 11291.52 44.11 0.00 0.00 11224.14 9175.04 25508.63 00:06:15.706 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.706 Nvme2n3 : 1.03 11278.96 44.06 0.00 0.00 11196.17 8620.50 26819.35 00:06:15.706 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.706 Nvme3n1 : 1.03 11266.44 44.01 0.00 0.00 11178.00 5898.24 28029.24 00:06:15.706 [2024-12-12T20:16:59.934Z] =================================================================================================================== 00:06:15.706 [2024-12-12T20:16:59.934Z] Total : 67788.05 264.80 0.00 0.00 11231.67 5898.24 28029.24 00:06:16.648 00:06:16.648 real 0m2.692s 00:06:16.648 user 0m2.385s 00:06:16.648 sys 0m0.192s 00:06:16.648 20:17:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.648 ************************************ 00:06:16.648 END TEST bdev_write_zeroes 00:06:16.648 
************************************ 00:06:16.648 20:17:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:16.648 20:17:00 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:16.648 20:17:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:16.648 20:17:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.648 20:17:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.648 ************************************ 00:06:16.648 START TEST bdev_json_nonenclosed 00:06:16.648 ************************************ 00:06:16.648 20:17:00 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:16.648 [2024-12-12 20:17:00.663448] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:06:16.648 [2024-12-12 20:17:00.663634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62487 ] 00:06:16.648 [2024-12-12 20:17:00.817021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.909 [2024-12-12 20:17:00.913634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.909 [2024-12-12 20:17:00.913832] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:16.909 [2024-12-12 20:17:00.913921] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:16.909 [2024-12-12 20:17:00.913944] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.909 00:06:16.909 real 0m0.476s 00:06:16.909 user 0m0.291s 00:06:16.909 sys 0m0.081s 00:06:16.909 20:17:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.909 ************************************ 00:06:16.909 END TEST bdev_json_nonenclosed 00:06:16.909 ************************************ 00:06:16.909 20:17:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:16.909 20:17:01 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:16.909 20:17:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:16.909 20:17:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.909 20:17:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.909 ************************************ 00:06:16.909 START TEST bdev_json_nonarray 00:06:16.909 ************************************ 00:06:16.909 20:17:01 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:17.174 [2024-12-12 20:17:01.185029] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
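[Editor's note] bdev_json_nonenclosed (above) and bdev_json_nonarray (starting here) feed bdevperf deliberately malformed configs: nonenclosed.json is not wrapped in a top-level {} and nonarray.json makes "subsystems" something other than an array, so each run must fail with the json_config error traced below. For contrast, this is the minimal well-formed shape both checks guard (contents illustrative, not the real bdev.json):
# sketch: the smallest config shape json_config accepts; the empty bdev config is illustrative
cat > /tmp/minimal.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF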
00:06:17.174 [2024-12-12 20:17:01.185256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62513 ] 00:06:17.174 [2024-12-12 20:17:01.345123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.437 [2024-12-12 20:17:01.438710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.437 [2024-12-12 20:17:01.438787] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:17.437 [2024-12-12 20:17:01.438803] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:17.437 [2024-12-12 20:17:01.438812] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.437 00:06:17.437 real 0m0.498s 00:06:17.437 user 0m0.300s 00:06:17.437 sys 0m0.094s 00:06:17.437 20:17:01 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.437 ************************************ 00:06:17.437 END TEST bdev_json_nonarray 00:06:17.437 ************************************ 00:06:17.437 20:17:01 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:17.437 20:17:01 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:17.437 20:17:01 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:17.437 20:17:01 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:17.437 20:17:01 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:17.437 20:17:01 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:17.437 20:17:01 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:17.437 20:17:01 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:17.437 20:17:01 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:17.437 20:17:01 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:17.437 20:17:01 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:17.437 20:17:01 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:17.437 ************************************ 00:06:17.437 END TEST blockdev_nvme 00:06:17.437 ************************************ 00:06:17.437 00:06:17.437 real 0m36.995s 00:06:17.437 user 0m58.137s 00:06:17.437 sys 0m5.052s 00:06:17.437 20:17:01 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.437 20:17:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:17.698 20:17:01 -- spdk/autotest.sh@209 -- # uname -s 00:06:17.698 20:17:01 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:17.698 20:17:01 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:17.698 20:17:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:17.698 20:17:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.698 20:17:01 -- common/autotest_common.sh@10 -- # set +x 00:06:17.698 ************************************ 00:06:17.698 START TEST blockdev_nvme_gpt 00:06:17.698 ************************************ 00:06:17.698 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:17.698 * Looking for test storage... 
00:06:17.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:17.698 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.698 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.698 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:17.698 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:17.698 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:17.699 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.699 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:17.699 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.699 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:17.699 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:17.699 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.699 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:17.699 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.699 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.699 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.699 20:17:01 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:17.699 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.699 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:17.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.699 --rc genhtml_branch_coverage=1 00:06:17.699 --rc genhtml_function_coverage=1 00:06:17.699 --rc genhtml_legend=1 00:06:17.699 --rc geninfo_all_blocks=1 00:06:17.699 --rc geninfo_unexecuted_blocks=1 00:06:17.699 00:06:17.699 ' 00:06:17.699 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:17.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.699 --rc 
genhtml_branch_coverage=1 00:06:17.699 --rc genhtml_function_coverage=1 00:06:17.699 --rc genhtml_legend=1 00:06:17.699 --rc geninfo_all_blocks=1 00:06:17.699 --rc geninfo_unexecuted_blocks=1 00:06:17.699 00:06:17.699 ' 00:06:17.699 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:17.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.699 --rc genhtml_branch_coverage=1 00:06:17.699 --rc genhtml_function_coverage=1 00:06:17.699 --rc genhtml_legend=1 00:06:17.699 --rc geninfo_all_blocks=1 00:06:17.699 --rc geninfo_unexecuted_blocks=1 00:06:17.699 00:06:17.699 ' 00:06:17.699 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:17.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.699 --rc genhtml_branch_coverage=1 00:06:17.699 --rc genhtml_function_coverage=1 00:06:17.699 --rc genhtml_legend=1 00:06:17.699 --rc geninfo_all_blocks=1 00:06:17.699 --rc geninfo_unexecuted_blocks=1 00:06:17.699 00:06:17.699 ' 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62591 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62591 
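[Editor's note] waitforlisten, whose trace follows, blocks until the spdk_tgt process just launched (pid 62591) is up and listening on the RPC socket /var/tmp/spdk.sock; the gpt suite only proceeds once that socket exists. A rough re-creation of the pattern (the polling loop is an assumption, not the helper's exact body):
# sketch of the start-then-wait pattern; the until-loop is an assumption
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # the socket appears once the target is ready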
00:06:17.699 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62591 ']' 00:06:17.699 20:17:01 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:17.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.699 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.699 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.699 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.699 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.699 20:17:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:17.699 [2024-12-12 20:17:01.897617] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:06:17.699 [2024-12-12 20:17:01.897890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62591 ] 00:06:17.960 [2024-12-12 20:17:02.057270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.960 [2024-12-12 20:17:02.152726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.532 20:17:02 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.532 20:17:02 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:18.532 20:17:02 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:18.532 20:17:02 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:18.532 20:17:02 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:18.793 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:19.054 Waiting for block devices as requested 00:06:19.054 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:19.054 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:19.054 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:19.315 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:24.647 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:24.647 20:17:08 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:24.647 20:17:08 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:24.647 BYT; 00:06:24.647 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:24.647 BYT; 00:06:24.647 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:24.647 20:17:08 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:24.647 20:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:25.670 The operation has completed successfully. 00:06:25.670 20:17:09 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:26.614 The operation has completed successfully. 00:06:26.614 20:17:10 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:26.875 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:27.449 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:27.449 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:27.449 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:27.449 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:27.449 20:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:27.449 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.449 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:27.449 [] 00:06:27.449 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.449 20:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:27.449 20:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:27.449 20:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:27.449 20:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:27.711 20:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:27.711 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.711 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:27.972 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.972 20:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:27.972 20:17:11 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.972 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:27.972 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.972 20:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:27.972 20:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:27.972 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.972 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:27.972 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.972 20:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:27.972 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.972 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:27.972 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.972 20:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:27.972 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.972 20:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:27.972 20:17:12 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.972 20:17:12 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:27.972 20:17:12 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:27.972 20:17:12 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:27.972 20:17:12 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.972 20:17:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:27.972 20:17:12 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.972 20:17:12 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:27.973 20:17:12 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e435d957-8c6e-4dc9-9a1c-270776c19802"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e435d957-8c6e-4dc9-9a1c-270776c19802",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' 
' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "562a3dd8-6e5a-420a-bfbd-8cf48672cad6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "562a3dd8-6e5a-420a-bfbd-8cf48672cad6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' 
"ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e3199c07-eaea-47d2-bca5-8d7a11c802d3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e3199c07-eaea-47d2-bca5-8d7a11c802d3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' 20:17:12 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:27.973 ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9187d2a4-e619-42db-b30c-c0d174ed3c60"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9187d2a4-e619-42db-b30c-c0d174ed3c60",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 
3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "31942e09-4830-4250-94f1-079a2abf28bf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "31942e09-4830-4250-94f1-079a2abf28bf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:27.973 20:17:12 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:27.973 20:17:12 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:27.973 20:17:12 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:27.973 20:17:12 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62591 00:06:27.973 20:17:12 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62591 ']' 00:06:27.973 20:17:12 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62591 00:06:27.973 20:17:12 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:27.973 20:17:12 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.973 20:17:12 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62591 00:06:27.973 killing process with pid 62591 00:06:27.973 20:17:12 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.973 20:17:12 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.973 20:17:12 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62591' 00:06:27.973 20:17:12 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62591 00:06:27.973 20:17:12 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62591 00:06:29.359 20:17:13 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:29.359 20:17:13 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:29.359 20:17:13 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:29.359 20:17:13 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.359 20:17:13 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:29.359 ************************************ 00:06:29.359 START TEST bdev_hello_world 00:06:29.359 ************************************ 00:06:29.359 20:17:13 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:29.359 [2024-12-12 20:17:13.391116] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:06:29.359 [2024-12-12 20:17:13.391229] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63213 ] 00:06:29.359 [2024-12-12 20:17:13.547740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.620 [2024-12-12 20:17:13.632383] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.193 [2024-12-12 20:17:14.133920] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:30.193 [2024-12-12 20:17:14.133966] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:30.193 [2024-12-12 20:17:14.133985] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:30.193 [2024-12-12 20:17:14.135994] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:30.193 [2024-12-12 20:17:14.136520] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:30.193 [2024-12-12 20:17:14.136545] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:30.193 [2024-12-12 20:17:14.136769] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:30.193 00:06:30.193 [2024-12-12 20:17:14.136788] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:30.766 ************************************ 00:06:30.766 END TEST bdev_hello_world 00:06:30.766 ************************************ 00:06:30.766 00:06:30.766 real 0m1.383s 00:06:30.766 user 0m1.122s 00:06:30.766 sys 0m0.156s 00:06:30.766 20:17:14 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.766 20:17:14 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:30.766 20:17:14 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:30.766 20:17:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:30.766 20:17:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.766 20:17:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:30.766 ************************************ 00:06:30.766 START TEST bdev_bounds 00:06:30.766 ************************************ 00:06:30.766 20:17:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:30.766 20:17:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63250 00:06:30.766 20:17:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.766 20:17:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63250' 00:06:30.766 Process bdevio pid: 63250 00:06:30.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
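[Editor's note] The bdev_bounds test starting here drives bdevio in wait mode: -w makes the binary start up and idle until the suites are triggered over RPC (which tests.py does just below), and -s 0 passes along PRE_RESERVED_MEM=0 from the earlier setup. Condensed from the trace that follows:
# condensed from the trace below: start bdevio waiting, then fire the suites via RPC
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests   # runs every suite listed under "I/O targets"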
00:06:30.766 20:17:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:30.766 20:17:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63250 00:06:30.766 20:17:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63250 ']' 00:06:30.766 20:17:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.766 20:17:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.766 20:17:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.766 20:17:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.766 20:17:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:30.766 [2024-12-12 20:17:14.822056] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:06:30.766 [2024-12-12 20:17:14.822199] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63250 ] 00:06:30.766 [2024-12-12 20:17:14.977077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.027 [2024-12-12 20:17:15.065709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.027 [2024-12-12 20:17:15.065982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.027 [2024-12-12 20:17:15.066055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.598 20:17:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.598 20:17:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:31.598 20:17:15 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:31.598 I/O targets: 00:06:31.598 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:31.598 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:31.598 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:31.598 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:31.598 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:31.598 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:31.598 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:31.598 00:06:31.598 00:06:31.598 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.598 http://cunit.sourceforge.net/ 00:06:31.598 00:06:31.598 00:06:31.598 Suite: bdevio tests on: Nvme3n1 00:06:31.598 Test: blockdev write read block ...passed 00:06:31.598 Test: blockdev write zeroes read block ...passed 00:06:31.598 Test: blockdev write zeroes read no split ...passed 00:06:31.598 Test: blockdev write zeroes read split ...passed 00:06:31.598 Test: blockdev write zeroes read split partial ...passed 00:06:31.598 Test: blockdev reset ...[2024-12-12 20:17:15.778664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:31.598 [2024-12-12 20:17:15.781234] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:31.598 passed 00:06:31.598 Test: blockdev write read 8 blocks ...passed 00:06:31.598 Test: blockdev write read size > 128k ...passed 00:06:31.598 Test: blockdev write read invalid size ...passed 00:06:31.598 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:31.598 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:31.598 Test: blockdev write read max offset ...passed 00:06:31.598 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:31.598 Test: blockdev writev readv 8 blocks ...passed 00:06:31.598 Test: blockdev writev readv 30 x 1block ...passed 00:06:31.598 Test: blockdev writev readv block ...passed 00:06:31.598 Test: blockdev writev readv size > 128k ...passed 00:06:31.598 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:31.598 Test: blockdev comparev and writev ...[2024-12-12 20:17:15.787769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7c04000 len:0x1000 00:06:31.598 [2024-12-12 20:17:15.787904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:31.598 passed 00:06:31.598 Test: blockdev nvme passthru rw ...passed 00:06:31.598 Test: blockdev nvme passthru vendor specific ...passed 00:06:31.598 Test: blockdev nvme admin passthru ...[2024-12-12 20:17:15.788509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:31.598 [2024-12-12 20:17:15.788534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:31.598 passed 00:06:31.598 Test: blockdev copy ...passed 00:06:31.598 Suite: bdevio tests on: Nvme2n3 00:06:31.598 Test: blockdev write read block ...passed 00:06:31.598 Test: blockdev write zeroes read block ...passed 00:06:31.598 Test: blockdev write zeroes read no split ...passed 00:06:31.859 Test: blockdev write zeroes read split ...passed 00:06:31.859 Test: blockdev write zeroes read split partial ...passed 00:06:31.859 Test: blockdev reset ...[2024-12-12 20:17:15.831115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:31.859 [2024-12-12 20:17:15.833945] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:31.859 passed 00:06:31.859 Test: blockdev write read 8 blocks ...passed 00:06:31.859 Test: blockdev write read size > 128k ...passed 00:06:31.859 Test: blockdev write read invalid size ...passed 00:06:31.859 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:31.859 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:31.859 Test: blockdev write read max offset ...passed 00:06:31.859 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:31.859 Test: blockdev writev readv 8 blocks ...passed 00:06:31.859 Test: blockdev writev readv 30 x 1block ...passed 00:06:31.859 Test: blockdev writev readv block ...passed 00:06:31.859 Test: blockdev writev readv size > 128k ...passed 00:06:31.859 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:31.859 Test: blockdev comparev and writev ...[2024-12-12 20:17:15.840975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7c02000 len:0x1000 00:06:31.859 [2024-12-12 20:17:15.841100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:31.859 passed 00:06:31.859 Test: blockdev nvme passthru rw ...passed 00:06:31.859 Test: blockdev nvme passthru vendor specific ...[2024-12-12 20:17:15.841720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:31.859 [2024-12-12 20:17:15.841779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:31.859 passed 00:06:31.859 Test: blockdev nvme admin passthru ...passed 00:06:31.859 Test: blockdev copy ...passed 00:06:31.859 Suite: bdevio tests on: Nvme2n2 00:06:31.859 Test: blockdev write read block ...passed 00:06:31.859 Test: blockdev write zeroes read block ...passed 00:06:31.859 Test: blockdev write zeroes read no split ...passed 00:06:31.859 Test: blockdev write zeroes read split ...passed 00:06:31.859 Test: blockdev write zeroes read split partial ...passed 00:06:31.859 Test: blockdev reset ...[2024-12-12 20:17:15.897938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:31.859 [2024-12-12 20:17:15.901487] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:31.859 passed 00:06:31.859 Test: blockdev write read 8 blocks ...passed 00:06:31.859 Test: blockdev write read size > 128k ...passed 00:06:31.859 Test: blockdev write read invalid size ...passed 00:06:31.859 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:31.859 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:31.859 Test: blockdev write read max offset ...passed 00:06:31.859 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:31.859 Test: blockdev writev readv 8 blocks ...passed 00:06:31.859 Test: blockdev writev readv 30 x 1block ...passed 00:06:31.859 Test: blockdev writev readv block ...passed 00:06:31.859 Test: blockdev writev readv size > 128k ...passed 00:06:31.859 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:31.859 Test: blockdev comparev and writev ...[2024-12-12 20:17:15.909756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2db838000 len:0x1000 00:06:31.859 [2024-12-12 20:17:15.909862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:31.859 passed 00:06:31.859 Test: blockdev nvme passthru rw ...passed 00:06:31.859 Test: blockdev nvme passthru vendor specific ...[2024-12-12 20:17:15.910981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:31.859 [2024-12-12 20:17:15.911256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:31.859 passed 00:06:31.859 Test: blockdev nvme admin passthru ...passed 00:06:31.859 Test: blockdev copy ...passed 00:06:31.859 Suite: bdevio tests on: Nvme2n1 00:06:31.859 Test: blockdev write read block ...passed 00:06:31.859 Test: blockdev write zeroes read block ...passed 00:06:31.859 Test: blockdev write zeroes read no split ...passed 00:06:31.859 Test: blockdev write zeroes read split ...passed 00:06:31.859 Test: blockdev write zeroes read split partial ...passed 00:06:31.859 Test: blockdev reset ...[2024-12-12 20:17:15.968537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:31.859 [2024-12-12 20:17:15.971404] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:31.859 00:06:31.859 Test: blockdev write read 8 blocks ...passed 00:06:31.859 Test: blockdev write read size > 128k ...passed 00:06:31.859 Test: blockdev write read invalid size ...passed 00:06:31.859 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:31.859 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:31.859 Test: blockdev write read max offset ...passed 00:06:31.859 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:31.859 Test: blockdev writev readv 8 blocks ...passed 00:06:31.859 Test: blockdev writev readv 30 x 1block ...passed 00:06:31.859 Test: blockdev writev readv block ...passed 00:06:31.859 Test: blockdev writev readv size > 128k ...passed 00:06:31.859 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:31.859 Test: blockdev comparev and writev ...[2024-12-12 20:17:15.979127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2db834000 len:0x1000 00:06:31.859 [2024-12-12 20:17:15.979271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:31.859 passed 00:06:31.859 Test: blockdev nvme passthru rw ...passed 00:06:31.859 Test: blockdev nvme passthru vendor specific ...[2024-12-12 20:17:15.980490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:31.859 [2024-12-12 20:17:15.980633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:31.859 passed 00:06:31.859 Test: blockdev nvme admin passthru ...passed 00:06:31.859 Test: blockdev copy ...passed 00:06:31.859 Suite: bdevio tests on: Nvme1n1p2 00:06:31.859 Test: blockdev write read block ...passed 00:06:31.859 Test: blockdev write zeroes read block ...passed 00:06:31.859 Test: blockdev write zeroes read no split ...passed 00:06:31.859 Test: blockdev write zeroes read split ...passed 00:06:31.859 Test: blockdev write zeroes read split partial ...passed 00:06:31.859 Test: blockdev reset ...[2024-12-12 20:17:16.035574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:31.859 [2024-12-12 20:17:16.038208] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:31.859 passed 00:06:31.859 Test: blockdev write read 8 blocks ...passed 00:06:31.860 Test: blockdev write read size > 128k ...passed 00:06:31.860 Test: blockdev write read invalid size ...passed 00:06:31.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:31.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:31.860 Test: blockdev write read max offset ...passed 00:06:31.860 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:31.860 Test: blockdev writev readv 8 blocks ...passed 00:06:31.860 Test: blockdev writev readv 30 x 1block ...passed 00:06:31.860 Test: blockdev writev readv block ...passed 00:06:31.860 Test: blockdev writev readv size > 128k ...passed 00:06:31.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:31.860 Test: blockdev comparev and writev ...[2024-12-12 20:17:16.045789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 lpassed 00:06:31.860 Test: blockdev nvme passthru rw ...passed 00:06:31.860 Test: blockdev nvme passthru vendor specific ...passed 00:06:31.860 Test: blockdev nvme admin passthru ...passed 00:06:31.860 Test: blockdev copy ...en:1 SGL DATA BLOCK ADDRESS 0x2db830000 len:0x1000 00:06:31.860 [2024-12-12 20:17:16.045899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:31.860 passed 00:06:31.860 Suite: bdevio tests on: Nvme1n1p1 00:06:31.860 Test: blockdev write read block ...passed 00:06:31.860 Test: blockdev write zeroes read block ...passed 00:06:31.860 Test: blockdev write zeroes read no split ...passed 00:06:31.860 Test: blockdev write zeroes read split ...passed 00:06:32.121 Test: blockdev write zeroes read split partial ...passed 00:06:32.121 Test: blockdev reset ...[2024-12-12 20:17:16.089570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:32.121 [2024-12-12 20:17:16.092037] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:32.121 passed 00:06:32.121 Test: blockdev write read 8 blocks ...passed 00:06:32.121 Test: blockdev write read size > 128k ...passed 00:06:32.121 Test: blockdev write read invalid size ...passed 00:06:32.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:32.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:32.121 Test: blockdev write read max offset ...passed 00:06:32.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:32.121 Test: blockdev writev readv 8 blocks ...passed 00:06:32.121 Test: blockdev writev readv 30 x 1block ...passed 00:06:32.121 Test: blockdev writev readv block ...passed 00:06:32.121 Test: blockdev writev readv size > 128k ...passed 00:06:32.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:32.121 Test: blockdev comparev and writev ...[2024-12-12 20:17:16.099934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2d7a0e000 len:0x1000 00:06:32.121 [2024-12-12 20:17:16.100052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:32.121 passed 00:06:32.121 Test: blockdev nvme passthru rw ...passed 00:06:32.121 Test: blockdev nvme passthru vendor specific ...passed 00:06:32.121 Test: blockdev nvme admin passthru ...passed 00:06:32.121 Test: blockdev copy ...passed 00:06:32.121 Suite: bdevio tests on: Nvme0n1 00:06:32.121 Test: blockdev write read block ...passed 00:06:32.121 Test: blockdev write zeroes read block ...passed 00:06:32.121 Test: blockdev write zeroes read no split ...passed 00:06:32.121 Test: blockdev write zeroes read split ...passed 00:06:32.121 Test: blockdev write zeroes read split partial ...passed 00:06:32.121 Test: blockdev reset ...[2024-12-12 20:17:16.143700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:32.121 passed 00:06:32.121 Test: blockdev write read 8 blocks ...[2024-12-12 20:17:16.146166] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:32.121 passed 00:06:32.121 Test: blockdev write read size > 128k ...passed 00:06:32.121 Test: blockdev write read invalid size ...passed 00:06:32.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:32.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:32.121 Test: blockdev write read max offset ...passed 00:06:32.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:32.121 Test: blockdev writev readv 8 blocks ...passed 00:06:32.121 Test: blockdev writev readv 30 x 1block ...passed 00:06:32.121 Test: blockdev writev readv block ...passed 00:06:32.121 Test: blockdev writev readv size > 128k ...passed 00:06:32.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:32.121 Test: blockdev comparev and writev ...passed 00:06:32.121 Test: blockdev nvme passthru rw ...[2024-12-12 20:17:16.151363] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:32.121 separate metadata which is not supported yet. 
00:06:32.121 Test: blockdev nvme passthru vendor specific ...passed
00:06:32.121 Test: blockdev nvme admin passthru ...[2024-12-12 20:17:16.151953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:06:32.121 [2024-12-12 20:17:16.151988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:06:32.121 passed
00:06:32.121 Test: blockdev copy ...passed
00:06:32.122
00:06:32.122 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:32.122               suites      7      7    n/a      0        0
00:06:32.122                tests    161    161    161      0        0
00:06:32.122              asserts   1025   1025   1025      0      n/a
00:06:32.122
00:06:32.122 Elapsed time = 1.110 seconds
00:06:32.122 0
00:06:32.122 20:17:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63250
00:06:32.122 20:17:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63250 ']'
00:06:32.122 20:17:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63250
00:06:32.122 20:17:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:06:32.122 20:17:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:32.122 20:17:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63250
00:06:32.122 20:17:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:32.122 20:17:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:32.122 killing process with pid 63250
20:17:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63250'
20:17:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63250
20:17:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63250
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:06:32.694
00:06:32.694 real 0m1.997s
00:06:32.694 user 0m5.113s
00:06:32.694 sys 0m0.268s
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:32.694 ************************************
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:06:32.694 END TEST bdev_bounds
00:06:32.694 ************************************
00:06:32.694 20:17:16 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:06:32.694 20:17:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:32.694 20:17:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:32.694 20:17:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:32.694 ************************************
00:06:32.694 START TEST bdev_nbd
00:06:32.694 ************************************
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63303
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63303 /var/tmp/spdk-nbd.sock
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63303 ']'
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:32.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:06:32.694 20:17:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:06:32.694 [2024-12-12 20:17:16.852989] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization...
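The killprocess trace above (autotest_common.sh@954-@978) is the standard teardown for an SPDK test app: validate the pid, confirm the process is still alive, resolve its command name, then kill it and reap it so the RPC socket is free for the next test. Below is a minimal bash sketch of that helper as the traced lines imply; it is a reconstruction, not the verbatim SPDK source, and the sudo branch in particular is an assumption since this run takes the plain-kill path (process_name resolves to reactor_0).

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                            # @954: refuse an empty pid
        kill -0 "$pid" || return 1                           # @958: is the process still alive?
        local process_name=unknown
        if [ "$(uname)" = Linux ]; then                      # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_0 in this run
        fi
        if [ "$process_name" = sudo ]; then                  # @964: not taken here
            # assumption: a sudo wrapper would need its child killed instead
            pid=$(ps -o pid= --ppid "$pid" | tr -d ' ')
        fi
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973
        wait "$pid"                                          # @978: reap it (the app is a child
                                                             # of the harness shell), freeing the socket
    }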
00:06:32.694 [2024-12-12 20:17:16.853104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.956 [2024-12-12 20:17:17.009208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.956 [2024-12-12 20:17:17.092778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:33.528 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.788 1+0 records in 00:06:33.788 1+0 records out 00:06:33.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280813 s, 14.6 MB/s 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:33.788 20:17:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.050 1+0 records in 00:06:34.050 1+0 records out 00:06:34.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292833 s, 14.0 MB/s 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:34.050 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.311 1+0 records in 00:06:34.311 1+0 records out 00:06:34.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272559 s, 15.0 MB/s 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:34.311 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:34.571 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:34.571 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:34.571 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.572 1+0 records in 00:06:34.572 1+0 records out 00:06:34.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355978 s, 11.5 MB/s 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.572 1+0 records in 00:06:34.572 1+0 records out 00:06:34.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555227 s, 7.4 MB/s 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:34.572 20:17:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.833 1+0 records in 00:06:34.833 1+0 records out 00:06:34.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415597 s, 9.9 MB/s 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:34.833 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:35.156 1+0 records in 00:06:35.156 1+0 records out 00:06:35.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401287 s, 10.2 MB/s 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:35.156 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.416 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:35.416 { 00:06:35.416 "nbd_device": "/dev/nbd0", 00:06:35.416 "bdev_name": "Nvme0n1" 00:06:35.416 }, 00:06:35.416 { 00:06:35.416 "nbd_device": "/dev/nbd1", 00:06:35.416 "bdev_name": "Nvme1n1p1" 00:06:35.416 }, 00:06:35.416 { 00:06:35.416 "nbd_device": "/dev/nbd2", 00:06:35.416 "bdev_name": "Nvme1n1p2" 00:06:35.416 }, 00:06:35.416 { 00:06:35.416 "nbd_device": "/dev/nbd3", 00:06:35.416 "bdev_name": "Nvme2n1" 00:06:35.416 }, 00:06:35.416 { 00:06:35.416 "nbd_device": "/dev/nbd4", 00:06:35.416 "bdev_name": "Nvme2n2" 00:06:35.416 }, 00:06:35.416 { 00:06:35.416 "nbd_device": "/dev/nbd5", 00:06:35.416 "bdev_name": "Nvme2n3" 00:06:35.416 }, 00:06:35.416 { 00:06:35.416 "nbd_device": "/dev/nbd6", 00:06:35.416 "bdev_name": "Nvme3n1" 00:06:35.416 } 00:06:35.416 ]' 00:06:35.416 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:35.416 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:35.416 { 00:06:35.416 "nbd_device": "/dev/nbd0", 00:06:35.416 "bdev_name": "Nvme0n1" 00:06:35.416 }, 00:06:35.417 { 00:06:35.417 "nbd_device": "/dev/nbd1", 00:06:35.417 "bdev_name": "Nvme1n1p1" 00:06:35.417 }, 00:06:35.417 { 00:06:35.417 "nbd_device": "/dev/nbd2", 00:06:35.417 "bdev_name": "Nvme1n1p2" 00:06:35.417 }, 00:06:35.417 { 00:06:35.417 "nbd_device": "/dev/nbd3", 00:06:35.417 "bdev_name": "Nvme2n1" 00:06:35.417 }, 00:06:35.417 { 00:06:35.417 "nbd_device": "/dev/nbd4", 00:06:35.417 "bdev_name": "Nvme2n2" 00:06:35.417 }, 00:06:35.417 { 00:06:35.417 "nbd_device": "/dev/nbd5", 00:06:35.417 "bdev_name": "Nvme2n3" 00:06:35.417 }, 00:06:35.417 { 00:06:35.417 "nbd_device": "/dev/nbd6", 00:06:35.417 "bdev_name": "Nvme3n1" 00:06:35.417 } 00:06:35.417 ]' 00:06:35.417 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:35.417 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:35.417 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.417 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:35.417 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.417 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:35.417 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.417 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.677 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.677 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.677 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.677 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.677 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.677 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.677 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.677 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.677 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.677 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.938 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.938 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.938 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.938 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.938 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.938 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.938 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.938 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.938 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.938 20:17:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:35.938 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:35.938 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:35.938 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:35.938 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.938 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.938 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:35.938 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.938 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.938 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.938 20:17:20 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:36.199 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:36.199 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:36.199 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:36.199 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.199 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.199 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:36.199 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:36.199 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.199 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.199 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:36.462 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:36.462 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:36.462 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:36.462 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.462 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.462 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:36.462 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:36.462 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.462 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.462 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:36.723 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:36.723 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:36.723 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:36.723 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.723 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.723 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:36.723 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:36.723 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.723 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.723 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:36.723 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:36.984 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:36.984 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:06:36.984 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.984 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.984 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:36.984 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:36.984 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.984 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.984 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.984 20:17:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.984 
20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:36.984 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:37.245 /dev/nbd0 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.245 1+0 records in 00:06:37.245 1+0 records out 00:06:37.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402681 s, 10.2 MB/s 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:37.245 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:37.506 /dev/nbd1 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.506 20:17:21 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.506 1+0 records in 00:06:37.506 1+0 records out 00:06:37.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329255 s, 12.4 MB/s 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:37.506 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:37.766 /dev/nbd10 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.766 1+0 records in 00:06:37.766 1+0 records out 00:06:37.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300664 s, 13.6 MB/s 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:37.766 20:17:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:38.024 /dev/nbd11 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.024 1+0 records in 00:06:38.024 1+0 records out 00:06:38.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518806 s, 7.9 MB/s 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:38.024 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:38.283 /dev/nbd12 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
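Each of the seven bdevs in the trace goes through the same cycle: nbd_start_disk exports the bdev over a /dev/nbdX node, waitfornbd polls /proc/partitions (up to 20 tries, as the traced counters show) until the kernel has registered the device, a single 4096-byte O_DIRECT read via dd proves it is readable, and nbd_stop_disk tears it down again, with nbd_get_disks piped through jq to confirm nothing is left exported. The following is a condensed sketch of one such cycle assembled from the commands in the trace; the rpc.py invocation, socket path, dd flags, and jq filter are taken verbatim from the log, while the sleep pacing inside the poll loop is an assumption (the traced helper shows only the counter bounds).

    rpc() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"
    }

    # export one bdev as an NBD block device
    rpc nbd_start_disk Nvme0n1 /dev/nbd0

    # waitfornbd: poll until the kernel lists the device in /proc/partitions
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1            # assumption: the real helper's pacing may differ
    done

    # verify with one direct 4 KiB read, exactly as the dd lines in the trace do
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)" != 0 ]   # copy must be non-empty
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

    # tear down and confirm no NBD device remains exported
    rpc nbd_stop_disk /dev/nbd0
    count=$(rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]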
00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.283 1+0 records in 00:06:38.283 1+0 records out 00:06:38.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419295 s, 9.8 MB/s 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:38.283 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:38.541 /dev/nbd13 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.541 1+0 records in 00:06:38.541 1+0 records out 00:06:38.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354892 s, 11.5 MB/s 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:38.541 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:38.799 /dev/nbd14 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.799 1+0 records in 00:06:38.799 1+0 records out 00:06:38.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455029 s, 9.0 MB/s 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.799 20:17:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.056 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.056 { 00:06:39.056 "nbd_device": "/dev/nbd0", 00:06:39.056 "bdev_name": "Nvme0n1" 00:06:39.056 }, 00:06:39.056 { 00:06:39.056 "nbd_device": "/dev/nbd1", 00:06:39.056 "bdev_name": "Nvme1n1p1" 00:06:39.056 }, 00:06:39.056 { 00:06:39.056 "nbd_device": "/dev/nbd10", 00:06:39.056 "bdev_name": "Nvme1n1p2" 00:06:39.056 }, 00:06:39.056 { 00:06:39.056 "nbd_device": "/dev/nbd11", 00:06:39.056 "bdev_name": "Nvme2n1" 00:06:39.056 }, 00:06:39.056 { 00:06:39.056 "nbd_device": "/dev/nbd12", 00:06:39.056 "bdev_name": "Nvme2n2" 00:06:39.056 }, 00:06:39.056 { 00:06:39.056 "nbd_device": "/dev/nbd13", 00:06:39.056 "bdev_name": "Nvme2n3" 
00:06:39.056 }, 00:06:39.056 { 00:06:39.056 "nbd_device": "/dev/nbd14", 00:06:39.056 "bdev_name": "Nvme3n1" 00:06:39.056 } 00:06:39.056 ]' 00:06:39.056 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.056 { 00:06:39.056 "nbd_device": "/dev/nbd0", 00:06:39.056 "bdev_name": "Nvme0n1" 00:06:39.056 }, 00:06:39.056 { 00:06:39.057 "nbd_device": "/dev/nbd1", 00:06:39.057 "bdev_name": "Nvme1n1p1" 00:06:39.057 }, 00:06:39.057 { 00:06:39.057 "nbd_device": "/dev/nbd10", 00:06:39.057 "bdev_name": "Nvme1n1p2" 00:06:39.057 }, 00:06:39.057 { 00:06:39.057 "nbd_device": "/dev/nbd11", 00:06:39.057 "bdev_name": "Nvme2n1" 00:06:39.057 }, 00:06:39.057 { 00:06:39.057 "nbd_device": "/dev/nbd12", 00:06:39.057 "bdev_name": "Nvme2n2" 00:06:39.057 }, 00:06:39.057 { 00:06:39.057 "nbd_device": "/dev/nbd13", 00:06:39.057 "bdev_name": "Nvme2n3" 00:06:39.057 }, 00:06:39.057 { 00:06:39.057 "nbd_device": "/dev/nbd14", 00:06:39.057 "bdev_name": "Nvme3n1" 00:06:39.057 } 00:06:39.057 ]' 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.057 /dev/nbd1 00:06:39.057 /dev/nbd10 00:06:39.057 /dev/nbd11 00:06:39.057 /dev/nbd12 00:06:39.057 /dev/nbd13 00:06:39.057 /dev/nbd14' 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.057 /dev/nbd1 00:06:39.057 /dev/nbd10 00:06:39.057 /dev/nbd11 00:06:39.057 /dev/nbd12 00:06:39.057 /dev/nbd13 00:06:39.057 /dev/nbd14' 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:39.057 256+0 records in 00:06:39.057 256+0 records out 00:06:39.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00714148 s, 147 MB/s 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.057 256+0 records in 00:06:39.057 256+0 records out 00:06:39.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0653248 s, 16.1 MB/s 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.057 256+0 records in 00:06:39.057 256+0 records out 00:06:39.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0648099 s, 16.2 MB/s 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.057 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:39.314 256+0 records in 00:06:39.314 256+0 records out 00:06:39.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.070798 s, 14.8 MB/s 00:06:39.314 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.314 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:39.314 256+0 records in 00:06:39.314 256+0 records out 00:06:39.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0660545 s, 15.9 MB/s 00:06:39.314 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.314 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:39.314 256+0 records in 00:06:39.314 256+0 records out 00:06:39.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0591996 s, 17.7 MB/s 00:06:39.314 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.314 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:39.314 256+0 records in 00:06:39.314 256+0 records out 00:06:39.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0629888 s, 16.6 MB/s 00:06:39.314 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.314 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:39.574 256+0 records in 00:06:39.574 256+0 records out 00:06:39.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.059321 s, 17.7 MB/s 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.574 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.836 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.836 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.836 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.836 20:17:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.836 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.836 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.836 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.836 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.836 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.836 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:40.097 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:40.097 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:40.097 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:40.097 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.097 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.097 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:40.098 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.098 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.098 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.098 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:40.358 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:40.358 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:40.358 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:40.358 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.358 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.358 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:40.358 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.358 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.358 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.358 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:40.619 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:40.619 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:40.619 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:40.619 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.619 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.619 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:40.619 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.619 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.619 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.619 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:40.880 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:40.880 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:40.880 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:40.880 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.880 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.880 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:40.880 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.880 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.880 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.880 20:17:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:40.880 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:40.880 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:40.880 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:40.880 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.880 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.880 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:40.880 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.880 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.880 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.880 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.880 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:41.141 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:41.400 malloc_lvol_verify 00:06:41.400 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:41.660 85e142eb-6eaa-4266-8063-f33da71b41c6 00:06:41.660 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:41.931 32e628cb-190d-40de-a7da-08f4531d1986 00:06:41.931 20:17:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:41.931 /dev/nbd0 00:06:41.932 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:41.932 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:41.932 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:41.932 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:41.932 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:41.932 mke2fs 1.47.0 (5-Feb-2023) 00:06:41.932 Discarding device blocks: 0/4096 done 00:06:41.932 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:41.932 00:06:41.932 Allocating group tables: 0/1 done 00:06:41.932 Writing inode tables: 0/1 done 00:06:41.932 Creating journal (1024 blocks): done 00:06:41.932 Writing superblocks and filesystem accounting information: 0/1 done 00:06:41.932 00:06:41.932 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:41.932 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.932 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:41.932 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.932 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:41.932 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:41.932 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63303 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63303 ']' 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63303 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63303 00:06:42.192 killing process with pid 63303 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63303' 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63303 00:06:42.192 20:17:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63303 00:06:43.136 20:17:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:43.136 00:06:43.136 real 0m10.218s 00:06:43.136 user 0m14.793s 00:06:43.136 sys 0m3.336s 00:06:43.136 20:17:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.136 20:17:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:43.136 ************************************ 00:06:43.136 END TEST bdev_nbd 00:06:43.136 ************************************ 00:06:43.136 skipping fio tests on NVMe due to multi-ns failures. 00:06:43.136 20:17:27 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:43.136 20:17:27 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:06:43.136 20:17:27 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:06:43.136 20:17:27 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:43.136 20:17:27 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:43.136 20:17:27 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:43.136 20:17:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:43.136 20:17:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.136 20:17:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.136 ************************************ 00:06:43.136 START TEST bdev_verify 00:06:43.136 ************************************ 00:06:43.136 20:17:27 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:43.136 [2024-12-12 20:17:27.128431] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:06:43.136 [2024-12-12 20:17:27.128562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63704 ] 00:06:43.136 [2024-12-12 20:17:27.289848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.397 [2024-12-12 20:17:27.414018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.397 [2024-12-12 20:17:27.414128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.970 Running I/O for 5 seconds... 
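
The verify stage starting here is a single bdevperf invocation, and the later stages reuse the same shape, so the flags are worth decoding once. A cleaned-up copy of the traced command (flag comments reflect standard bdevperf usage; -C is carried over verbatim without interpretation):

    # -q 128: queue depth; -o 4096: I/O size in bytes; -w verify: write,
    # read back, and compare; -t 5: run time in seconds; -m 0x3: core mask
    # (cores 0 and 1); -C taken verbatim from the traced command.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
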
00:06:46.298 19456.00 IOPS, 76.00 MiB/s [2024-12-12T20:17:31.460Z] 21024.00 IOPS, 82.12 MiB/s [2024-12-12T20:17:32.393Z] 21674.67 IOPS, 84.67 MiB/s [2024-12-12T20:17:33.328Z] 22272.00 IOPS, 87.00 MiB/s [2024-12-12T20:17:33.328Z] 22566.40 IOPS, 88.15 MiB/s 00:06:49.100 Latency(us) 00:06:49.100 [2024-12-12T20:17:33.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.100 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:49.100 Verification LBA range: start 0x0 length 0xbd0bd 00:06:49.100 Nvme0n1 : 5.08 1625.45 6.35 0.00 0.00 78353.46 13510.50 104051.00 00:06:49.100 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:49.100 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:49.100 Nvme0n1 : 5.07 1552.11 6.06 0.00 0.00 81995.71 8973.39 94371.84 00:06:49.100 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:49.100 Verification LBA range: start 0x0 length 0x4ff80 00:06:49.100 Nvme1n1p1 : 5.08 1624.75 6.35 0.00 0.00 78270.10 12300.60 97194.93 00:06:49.100 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:49.100 Verification LBA range: start 0x4ff80 length 0x4ff80 00:06:49.100 Nvme1n1p1 : 5.09 1560.27 6.09 0.00 0.00 81492.91 9729.58 75013.51 00:06:49.100 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:49.100 Verification LBA range: start 0x0 length 0x4ff7f 00:06:49.100 Nvme1n1p2 : 5.10 1632.41 6.38 0.00 0.00 77876.79 9275.86 87919.06 00:06:49.100 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:49.100 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:06:49.100 Nvme1n1p2 : 5.09 1559.30 6.09 0.00 0.00 81374.92 11191.53 68964.04 00:06:49.100 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:49.100 Verification LBA range: start 0x0 length 0x80000 00:06:49.100 Nvme2n1 : 5.10 1631.99 6.37 0.00 0.00 77696.92 9729.58 73400.32 00:06:49.100 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:49.100 Verification LBA range: start 0x80000 length 0x80000 00:06:49.100 Nvme2n1 : 5.09 1558.86 6.09 0.00 0.00 81248.93 11494.01 68964.04 00:06:49.100 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:49.100 Verification LBA range: start 0x0 length 0x80000 00:06:49.100 Nvme2n2 : 5.10 1631.02 6.37 0.00 0.00 77567.97 11746.07 66947.54 00:06:49.100 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:49.100 Verification LBA range: start 0x80000 length 0x80000 00:06:49.100 Nvme2n2 : 5.09 1558.46 6.09 0.00 0.00 81097.86 11846.89 70980.53 00:06:49.100 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:49.100 Verification LBA range: start 0x0 length 0x80000 00:06:49.100 Nvme2n3 : 5.10 1630.60 6.37 0.00 0.00 77430.24 11998.13 64931.05 00:06:49.100 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:49.100 Verification LBA range: start 0x80000 length 0x80000 00:06:49.100 Nvme2n3 : 5.09 1558.04 6.09 0.00 0.00 80953.78 12149.37 72190.42 00:06:49.100 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:49.100 Verification LBA range: start 0x0 length 0x20000 00:06:49.100 Nvme3n1 : 5.10 1630.18 6.37 0.00 0.00 77305.71 12199.78 66947.54 00:06:49.100 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:49.100 Verification LBA range: start 0x20000 length 0x20000 00:06:49.100 Nvme3n1 
: 5.10 1557.54 6.08 0.00 0.00 80835.22 12451.84 76223.41 00:06:49.100 [2024-12-12T20:17:33.328Z] =================================================================================================================== 00:06:49.100 [2024-12-12T20:17:33.328Z] Total : 22310.98 87.15 0.00 0.00 79494.34 8973.39 104051.00 00:06:51.025 00:06:51.025 real 0m7.719s 00:06:51.025 user 0m14.419s 00:06:51.025 sys 0m0.293s 00:06:51.025 ************************************ 00:06:51.025 END TEST bdev_verify 00:06:51.025 ************************************ 00:06:51.025 20:17:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.025 20:17:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:51.025 20:17:34 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:51.025 20:17:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:51.025 20:17:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.025 20:17:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.025 ************************************ 00:06:51.025 START TEST bdev_verify_big_io 00:06:51.025 ************************************ 00:06:51.025 20:17:34 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:51.025 [2024-12-12 20:17:34.880856] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:06:51.025 [2024-12-12 20:17:34.880944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63802 ] 00:06:51.025 [2024-12-12 20:17:35.033738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.025 [2024-12-12 20:17:35.132404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.025 [2024-12-12 20:17:35.132452] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.592 Running I/O for 5 seconds... 
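
Before the big_io numbers arrive, a quick sanity check on the verify Total row above: bdevperf's MiB/s column is just IOPS x I/O size / 2^20, so for the 4096-byte run:

    awk 'BEGIN { printf "%.2f MiB/s\n", 22310.98 * 4096 / 1048576 }'   # prints 87.15, matching the table
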
00:06:57.687 1027.00 IOPS, 64.19 MiB/s [2024-12-12T20:17:42.177Z] 2482.00 IOPS, 155.12 MiB/s [2024-12-12T20:17:42.177Z] 3056.67 IOPS, 191.04 MiB/s 00:06:57.949 Latency(us) 00:06:57.949 [2024-12-12T20:17:42.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.949 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.949 Verification LBA range: start 0x0 length 0xbd0b 00:06:57.949 Nvme0n1 : 5.78 84.83 5.30 0.00 0.00 1432930.85 22786.36 1438968.91 00:06:57.949 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.949 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:57.949 Nvme0n1 : 5.89 104.35 6.52 0.00 0.00 1159376.02 17845.96 1445421.69 00:06:57.949 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.949 Verification LBA range: start 0x0 length 0x4ff8 00:06:57.949 Nvme1n1p1 : 5.95 89.91 5.62 0.00 0.00 1303878.25 93565.24 1226027.32 00:06:57.949 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.949 Verification LBA range: start 0x4ff8 length 0x4ff8 00:06:57.949 Nvme1n1p1 : 5.90 102.96 6.43 0.00 0.00 1124917.34 100018.02 1232480.10 00:06:57.949 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.949 Verification LBA range: start 0x0 length 0x4ff7 00:06:57.949 Nvme1n1p2 : 6.04 95.37 5.96 0.00 0.00 1202826.11 85499.27 1226027.32 00:06:57.949 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.949 Verification LBA range: start 0x4ff7 length 0x4ff7 00:06:57.949 Nvme1n1p2 : 5.90 108.50 6.78 0.00 0.00 1052545.34 133895.09 1019538.51 00:06:57.949 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.949 Verification LBA range: start 0x0 length 0x8000 00:06:57.949 Nvme2n1 : 6.09 99.65 6.23 0.00 0.00 1117603.65 47992.52 1342177.28 00:06:57.949 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.949 Verification LBA range: start 0x8000 length 0x8000 00:06:57.949 Nvme2n1 : 6.08 115.72 7.23 0.00 0.00 957286.76 58881.58 922746.88 00:06:57.949 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.949 Verification LBA range: start 0x0 length 0x8000 00:06:57.949 Nvme2n2 : 6.15 104.07 6.50 0.00 0.00 1033752.26 55251.89 1290555.08 00:06:57.949 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.949 Verification LBA range: start 0x8000 length 0x8000 00:06:57.949 Nvme2n2 : 6.15 111.48 6.97 0.00 0.00 957578.70 66544.25 1961643.72 00:06:57.949 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.949 Verification LBA range: start 0x0 length 0x8000 00:06:57.949 Nvme2n3 : 6.20 106.78 6.67 0.00 0.00 968764.64 46177.67 1322818.95 00:06:57.949 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.949 Verification LBA range: start 0x8000 length 0x8000 00:06:57.949 Nvme2n3 : 6.27 120.11 7.51 0.00 0.00 857805.38 43152.94 2000360.37 00:06:57.949 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.949 Verification LBA range: start 0x0 length 0x2000 00:06:57.949 Nvme3n1 : 6.28 126.02 7.88 0.00 0.00 798671.39 825.50 1355082.83 00:06:57.949 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.949 Verification LBA range: start 0x2000 length 0x2000 00:06:57.949 Nvme3n1 : 6.29 137.33 8.58 0.00 0.00 728803.38 3528.86 1806777.11 00:06:57.949 
[2024-12-12T20:17:42.177Z] =================================================================================================================== 00:06:57.949 [2024-12-12T20:17:42.177Z] Total : 1507.09 94.19 0.00 0.00 1023065.37 825.50 2000360.37 00:07:03.267 00:07:03.267 real 0m11.943s 00:07:03.267 user 0m20.332s 00:07:03.267 sys 0m0.282s 00:07:03.267 ************************************ 00:07:03.267 END TEST bdev_verify_big_io 00:07:03.267 ************************************ 00:07:03.267 20:17:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.267 20:17:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:03.267 20:17:46 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:03.267 20:17:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:03.267 20:17:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.267 20:17:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.267 ************************************ 00:07:03.267 START TEST bdev_write_zeroes 00:07:03.267 ************************************ 00:07:03.267 20:17:46 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:03.267 [2024-12-12 20:17:46.915907] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:07:03.267 [2024-12-12 20:17:46.916044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63918 ] 00:07:03.267 [2024-12-12 20:17:47.080752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.267 [2024-12-12 20:17:47.212591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.840 Running I/O for 1 seconds... 
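
The same arithmetic holds for the bdev_verify_big_io totals above: 1507.09 IOPS x 65536 bytes / 2^20 = 94.19 MiB/s. IOPS drop roughly 15x relative to the 4 KiB verify run while aggregate bandwidth rises, the expected trade-off once each I/O carries 16x more data.
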
00:07:04.783 48384.00 IOPS, 189.00 MiB/s 00:07:04.783 Latency(us) 00:07:04.783 [2024-12-12T20:17:49.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:04.783 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:04.783 Nvme0n1 : 1.03 6921.34 27.04 0.00 0.00 18419.36 8368.44 26819.35 00:07:04.783 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:04.783 Nvme1n1p1 : 1.03 6912.79 27.00 0.00 0.00 18417.57 14720.39 27021.00 00:07:04.783 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:04.783 Nvme1n1p2 : 1.03 6936.74 27.10 0.00 0.00 18243.05 9830.40 25710.28 00:07:04.783 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:04.783 Nvme2n1 : 1.03 6928.65 27.07 0.00 0.00 18229.72 9880.81 25609.45 00:07:04.783 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:04.783 Nvme2n2 : 1.03 6899.62 26.95 0.00 0.00 18281.41 9578.34 25508.63 00:07:04.783 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:04.783 Nvme2n3 : 1.03 6891.74 26.92 0.00 0.00 18283.06 9275.86 26214.40 00:07:04.783 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:04.783 Nvme3n1 : 1.03 6883.93 26.89 0.00 0.00 18269.10 9175.04 27827.59 00:07:04.783 [2024-12-12T20:17:49.011Z] =================================================================================================================== 00:07:04.783 [2024-12-12T20:17:49.011Z] Total : 48374.81 188.96 0.00 0.00 18306.00 8368.44 27827.59 00:07:05.725 00:07:05.725 real 0m2.911s 00:07:05.725 user 0m2.534s 00:07:05.725 sys 0m0.251s 00:07:05.725 ************************************ 00:07:05.725 END TEST bdev_write_zeroes 00:07:05.725 ************************************ 00:07:05.725 20:17:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.725 20:17:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:05.725 20:17:49 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:05.725 20:17:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:05.725 20:17:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.725 20:17:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:05.725 ************************************ 00:07:05.725 START TEST bdev_json_nonenclosed 00:07:05.725 ************************************ 00:07:05.725 20:17:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:05.725 [2024-12-12 20:17:49.900752] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:07:05.725 [2024-12-12 20:17:49.901310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63971 ] 00:07:05.985 [2024-12-12 20:17:50.068782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.985 [2024-12-12 20:17:50.206607] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.985 [2024-12-12 20:17:50.206714] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:05.985 [2024-12-12 20:17:50.206734] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:05.985 [2024-12-12 20:17:50.206744] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.246 00:07:06.246 real 0m0.580s 00:07:06.246 user 0m0.356s 00:07:06.246 sys 0m0.117s 00:07:06.246 20:17:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.246 ************************************ 00:07:06.246 END TEST bdev_json_nonenclosed 00:07:06.246 ************************************ 00:07:06.246 20:17:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:06.246 20:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:06.246 20:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:06.246 20:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.246 20:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.246 ************************************ 00:07:06.246 START TEST bdev_json_nonarray 00:07:06.246 ************************************ 00:07:06.246 20:17:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:06.507 [2024-12-12 20:17:50.540353] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:07:06.507 [2024-12-12 20:17:50.540511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64002 ] 00:07:06.507 [2024-12-12 20:17:50.705442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.768 [2024-12-12 20:17:50.840883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.768 [2024-12-12 20:17:50.841001] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
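
Both JSON failures above are deliberate negative tests: bdevperf must reject the config and exit non-zero rather than crash. The actual contents of nonenclosed.json and nonarray.json are not shown in this log; shapes consistent with the two logged errors would be:

    # nonenclosed.json -- triggers "not enclosed in {}" (assumed shape)
    "subsystems": []

    # nonarray.json -- triggers "'subsystems' should be an array" (assumed shape)
    { "subsystems": {} }

    # a well-formed config wraps an array of subsystem objects
    { "subsystems": [] }
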
00:07:06.768 [2024-12-12 20:17:50.841021] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:06.768 [2024-12-12 20:17:50.841031] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.029 00:07:07.029 real 0m0.575s 00:07:07.029 user 0m0.359s 00:07:07.029 sys 0m0.109s 00:07:07.029 ************************************ 00:07:07.029 END TEST bdev_json_nonarray 00:07:07.029 ************************************ 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:07.029 20:17:51 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:07.029 20:17:51 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:07.029 20:17:51 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:07.029 20:17:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.029 20:17:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.029 20:17:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:07.029 ************************************ 00:07:07.029 START TEST bdev_gpt_uuid 00:07:07.029 ************************************ 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64028 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64028 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 64028 ']' 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.029 20:17:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:07.029 [2024-12-12 20:17:51.197399] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
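
Once spdk_tgt is up and the config is loaded, the gpt_uuid checks that follow reduce to bdev_get_bdevs plus jq. Equivalent one-liners against the same target (UUID taken from the trace below):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
        | jq -r '.[0].aliases[0]'                                  # alias equals the partition GUID
    $RPC bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
        | jq -r '.[0].driver_specific.gpt.unique_partition_guid'   # same GUID from the GPT metadata
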
00:07:07.029 [2024-12-12 20:17:51.197560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64028 ] 00:07:07.290 [2024-12-12 20:17:51.359448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.290 [2024-12-12 20:17:51.491335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.235 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.235 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:08.235 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:08.235 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.235 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:08.496 Some configs were skipped because the RPC state that can call them passed over. 00:07:08.496 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.496 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:08.496 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.496 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:08.496 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.496 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:08.496 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.496 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:08.496 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.496 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:08.496 { 00:07:08.496 "name": "Nvme1n1p1", 00:07:08.496 "aliases": [ 00:07:08.496 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:08.496 ], 00:07:08.496 "product_name": "GPT Disk", 00:07:08.496 "block_size": 4096, 00:07:08.496 "num_blocks": 655104, 00:07:08.496 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:08.496 "assigned_rate_limits": { 00:07:08.496 "rw_ios_per_sec": 0, 00:07:08.496 "rw_mbytes_per_sec": 0, 00:07:08.497 "r_mbytes_per_sec": 0, 00:07:08.497 "w_mbytes_per_sec": 0 00:07:08.497 }, 00:07:08.497 "claimed": false, 00:07:08.497 "zoned": false, 00:07:08.497 "supported_io_types": { 00:07:08.497 "read": true, 00:07:08.497 "write": true, 00:07:08.497 "unmap": true, 00:07:08.497 "flush": true, 00:07:08.497 "reset": true, 00:07:08.497 "nvme_admin": false, 00:07:08.497 "nvme_io": false, 00:07:08.497 "nvme_io_md": false, 00:07:08.497 "write_zeroes": true, 00:07:08.497 "zcopy": false, 00:07:08.497 "get_zone_info": false, 00:07:08.497 "zone_management": false, 00:07:08.497 "zone_append": false, 00:07:08.497 "compare": true, 00:07:08.497 "compare_and_write": false, 00:07:08.497 "abort": true, 00:07:08.497 "seek_hole": false, 00:07:08.497 "seek_data": false, 00:07:08.497 "copy": true, 00:07:08.497 "nvme_iov_md": false 00:07:08.497 }, 00:07:08.497 "driver_specific": { 
00:07:08.497 "gpt": { 00:07:08.497 "base_bdev": "Nvme1n1", 00:07:08.497 "offset_blocks": 256, 00:07:08.497 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:08.497 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:08.497 "partition_name": "SPDK_TEST_first" 00:07:08.497 } 00:07:08.497 } 00:07:08.497 } 00:07:08.497 ]' 00:07:08.497 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:08.497 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:08.497 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:08.497 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:08.497 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:08.497 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:08.497 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:08.497 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.497 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:08.497 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.497 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:08.497 { 00:07:08.497 "name": "Nvme1n1p2", 00:07:08.497 "aliases": [ 00:07:08.497 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:08.497 ], 00:07:08.497 "product_name": "GPT Disk", 00:07:08.497 "block_size": 4096, 00:07:08.497 "num_blocks": 655103, 00:07:08.497 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:08.497 "assigned_rate_limits": { 00:07:08.497 "rw_ios_per_sec": 0, 00:07:08.497 "rw_mbytes_per_sec": 0, 00:07:08.497 "r_mbytes_per_sec": 0, 00:07:08.497 "w_mbytes_per_sec": 0 00:07:08.497 }, 00:07:08.497 "claimed": false, 00:07:08.497 "zoned": false, 00:07:08.497 "supported_io_types": { 00:07:08.497 "read": true, 00:07:08.497 "write": true, 00:07:08.497 "unmap": true, 00:07:08.497 "flush": true, 00:07:08.497 "reset": true, 00:07:08.497 "nvme_admin": false, 00:07:08.497 "nvme_io": false, 00:07:08.497 "nvme_io_md": false, 00:07:08.497 "write_zeroes": true, 00:07:08.497 "zcopy": false, 00:07:08.497 "get_zone_info": false, 00:07:08.497 "zone_management": false, 00:07:08.497 "zone_append": false, 00:07:08.497 "compare": true, 00:07:08.497 "compare_and_write": false, 00:07:08.497 "abort": true, 00:07:08.497 "seek_hole": false, 00:07:08.497 "seek_data": false, 00:07:08.497 "copy": true, 00:07:08.497 "nvme_iov_md": false 00:07:08.497 }, 00:07:08.497 "driver_specific": { 00:07:08.497 "gpt": { 00:07:08.497 "base_bdev": "Nvme1n1", 00:07:08.497 "offset_blocks": 655360, 00:07:08.497 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:08.497 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:08.497 "partition_name": "SPDK_TEST_second" 00:07:08.497 } 00:07:08.497 } 00:07:08.497 } 00:07:08.497 ]' 00:07:08.497 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 64028 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 64028 ']' 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 64028 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64028 00:07:08.759 killing process with pid 64028 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64028' 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 64028 00:07:08.759 20:17:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 64028 00:07:10.675 00:07:10.675 real 0m3.363s 00:07:10.675 user 0m3.387s 00:07:10.675 sys 0m0.496s 00:07:10.675 20:17:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.675 ************************************ 00:07:10.675 END TEST bdev_gpt_uuid 00:07:10.675 ************************************ 00:07:10.675 20:17:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:10.675 20:17:54 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:10.675 20:17:54 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:10.675 20:17:54 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:10.675 20:17:54 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:10.675 20:17:54 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:10.675 20:17:54 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:10.675 20:17:54 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:10.675 20:17:54 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:10.675 20:17:54 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:10.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:10.937 Waiting for block devices as requested 00:07:10.937 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:10.937 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:11.198 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:11.198 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:16.475 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:16.475 20:18:00 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:16.475 20:18:00 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:16.475 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:16.475 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:16.475 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:16.475 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:16.475 20:18:00 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:16.475 00:07:16.475 real 0m58.996s 00:07:16.475 user 1m15.017s 00:07:16.475 sys 0m7.759s 00:07:16.475 20:18:00 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.475 ************************************ 00:07:16.475 END TEST blockdev_nvme_gpt 00:07:16.475 ************************************ 00:07:16.475 20:18:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.736 20:18:00 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:16.736 20:18:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.736 20:18:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.736 20:18:00 -- common/autotest_common.sh@10 -- # set +x 00:07:16.736 ************************************ 00:07:16.736 START TEST nvme 00:07:16.736 ************************************ 00:07:16.736 20:18:00 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:16.736 * Looking for test storage... 00:07:16.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:16.736 20:18:00 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:16.736 20:18:00 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:16.736 20:18:00 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:16.736 20:18:00 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:16.736 20:18:00 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.736 20:18:00 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.736 20:18:00 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.736 20:18:00 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.736 20:18:00 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.736 20:18:00 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.736 20:18:00 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.736 20:18:00 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.736 20:18:00 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.736 20:18:00 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.736 20:18:00 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.736 20:18:00 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:16.736 20:18:00 nvme -- scripts/common.sh@345 -- # : 1 00:07:16.736 20:18:00 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.736 20:18:00 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.736 20:18:00 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:16.736 20:18:00 nvme -- scripts/common.sh@353 -- # local d=1 00:07:16.736 20:18:00 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.736 20:18:00 nvme -- scripts/common.sh@355 -- # echo 1 00:07:16.736 20:18:00 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.736 20:18:00 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:16.736 20:18:00 nvme -- scripts/common.sh@353 -- # local d=2 00:07:16.736 20:18:00 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.736 20:18:00 nvme -- scripts/common.sh@355 -- # echo 2 00:07:16.736 20:18:00 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.736 20:18:00 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.736 20:18:00 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.736 20:18:00 nvme -- scripts/common.sh@368 -- # return 0 00:07:16.736 20:18:00 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.736 20:18:00 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:16.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.736 --rc genhtml_branch_coverage=1 00:07:16.736 --rc genhtml_function_coverage=1 00:07:16.736 --rc genhtml_legend=1 00:07:16.736 --rc geninfo_all_blocks=1 00:07:16.736 --rc geninfo_unexecuted_blocks=1 00:07:16.736 00:07:16.736 ' 00:07:16.736 20:18:00 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:16.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.736 --rc genhtml_branch_coverage=1 00:07:16.736 --rc genhtml_function_coverage=1 00:07:16.736 --rc genhtml_legend=1 00:07:16.736 --rc geninfo_all_blocks=1 00:07:16.736 --rc geninfo_unexecuted_blocks=1 00:07:16.736 00:07:16.736 ' 00:07:16.736 20:18:00 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:16.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.736 --rc genhtml_branch_coverage=1 00:07:16.736 --rc genhtml_function_coverage=1 00:07:16.736 --rc genhtml_legend=1 00:07:16.736 --rc geninfo_all_blocks=1 00:07:16.736 --rc geninfo_unexecuted_blocks=1 00:07:16.736 00:07:16.736 ' 00:07:16.736 20:18:00 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:16.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.736 --rc genhtml_branch_coverage=1 00:07:16.736 --rc genhtml_function_coverage=1 00:07:16.736 --rc genhtml_legend=1 00:07:16.736 --rc geninfo_all_blocks=1 00:07:16.736 --rc geninfo_unexecuted_blocks=1 00:07:16.736 00:07:16.736 ' 00:07:16.736 20:18:00 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:17.304 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:17.870 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:17.870 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:17.870 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:17.870 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:17.870 20:18:02 nvme -- nvme/nvme.sh@79 -- # uname 00:07:17.870 20:18:02 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:17.870 20:18:02 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:17.870 20:18:02 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:17.870 20:18:02 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:17.870 20:18:02 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:17.870 20:18:02 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:17.870 Waiting for stub to ready for secondary processes... 00:07:17.870 20:18:02 nvme -- common/autotest_common.sh@1075 -- # stubpid=64666 00:07:17.870 20:18:02 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:17.870 20:18:02 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:17.870 20:18:02 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:17.870 20:18:02 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64666 ]] 00:07:17.870 20:18:02 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:17.870 [2024-12-12 20:18:02.090693] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:07:17.870 [2024-12-12 20:18:02.090804] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:18.803 [2024-12-12 20:18:02.846627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.803 [2024-12-12 20:18:02.943695] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.803 [2024-12-12 20:18:02.943967] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.803 [2024-12-12 20:18:02.943984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.803 [2024-12-12 20:18:02.959263] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:18.803 [2024-12-12 20:18:02.959296] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:18.803 [2024-12-12 20:18:02.973555] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:18.803 [2024-12-12 20:18:02.973720] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:18.803 [2024-12-12 20:18:02.977015] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:18.803 [2024-12-12 20:18:02.977300] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:18.803 [2024-12-12 20:18:02.977394] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:18.803 [2024-12-12 20:18:02.981032] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:18.803 [2024-12-12 20:18:02.981312] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:18.803 [2024-12-12 20:18:02.981491] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:18.803 [2024-12-12 20:18:02.985601] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:18.803 [2024-12-12 20:18:02.985783] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:18.803 [2024-12-12 20:18:02.985828] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:18.803 [2024-12-12 20:18:02.985864] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:18.803 [2024-12-12 20:18:02.985894] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:19.060 done. 00:07:19.060 20:18:03 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:19.060 20:18:03 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:19.060 20:18:03 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:19.060 20:18:03 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:19.060 20:18:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.060 20:18:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:19.060 ************************************ 00:07:19.060 START TEST nvme_reset 00:07:19.060 ************************************ 00:07:19.060 20:18:03 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:19.318 Initializing NVMe Controllers 00:07:19.318 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:19.318 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:19.318 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:19.318 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:19.318 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:19.318 00:07:19.318 real 0m0.224s 00:07:19.318 user 0m0.078s 00:07:19.318 sys 0m0.094s 00:07:19.318 20:18:03 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.318 ************************************ 00:07:19.318 END TEST nvme_reset 00:07:19.318 ************************************ 00:07:19.318 20:18:03 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:19.318 20:18:03 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:19.318 20:18:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.318 20:18:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.318 20:18:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:19.318 ************************************ 00:07:19.318 START TEST nvme_identify 00:07:19.318 ************************************ 00:07:19.318 20:18:03 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:19.318 20:18:03 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:19.318 20:18:03 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:19.318 20:18:03 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:19.318 20:18:03 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:19.318 20:18:03 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:19.318 20:18:03 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:19.318 20:18:03 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:19.318 20:18:03 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:19.318 20:18:03 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:19.318 20:18:03 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:19.318 20:18:03 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:19.318 20:18:03 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:19.579 [2024-12-12 
20:18:03.615324] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64688 terminated unexpected 00:07:19.579 ===================================================== 00:07:19.579 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:19.579 ===================================================== 00:07:19.579 Controller Capabilities/Features 00:07:19.579 ================================ 00:07:19.579 Vendor ID: 1b36 00:07:19.579 Subsystem Vendor ID: 1af4 00:07:19.579 Serial Number: 12343 00:07:19.579 Model Number: QEMU NVMe Ctrl 00:07:19.579 Firmware Version: 8.0.0 00:07:19.579 Recommended Arb Burst: 6 00:07:19.579 IEEE OUI Identifier: 00 54 52 00:07:19.579 Multi-path I/O 00:07:19.579 May have multiple subsystem ports: No 00:07:19.579 May have multiple controllers: Yes 00:07:19.579 Associated with SR-IOV VF: No 00:07:19.579 Max Data Transfer Size: 524288 00:07:19.579 Max Number of Namespaces: 256 00:07:19.579 Max Number of I/O Queues: 64 00:07:19.579 NVMe Specification Version (VS): 1.4 00:07:19.579 NVMe Specification Version (Identify): 1.4 00:07:19.579 Maximum Queue Entries: 2048 00:07:19.579 Contiguous Queues Required: Yes 00:07:19.579 Arbitration Mechanisms Supported 00:07:19.579 Weighted Round Robin: Not Supported 00:07:19.579 Vendor Specific: Not Supported 00:07:19.579 Reset Timeout: 7500 ms 00:07:19.579 Doorbell Stride: 4 bytes 00:07:19.579 NVM Subsystem Reset: Not Supported 00:07:19.579 Command Sets Supported 00:07:19.579 NVM Command Set: Supported 00:07:19.579 Boot Partition: Not Supported 00:07:19.579 Memory Page Size Minimum: 4096 bytes 00:07:19.579 Memory Page Size Maximum: 65536 bytes 00:07:19.579 Persistent Memory Region: Not Supported 00:07:19.579 Optional Asynchronous Events Supported 00:07:19.579 Namespace Attribute Notices: Supported 00:07:19.579 Firmware Activation Notices: Not Supported 00:07:19.579 ANA Change Notices: Not Supported 00:07:19.579 PLE Aggregate Log Change Notices: Not Supported 00:07:19.579 LBA Status Info Alert Notices: Not Supported 00:07:19.579 EGE Aggregate Log Change Notices: Not Supported 00:07:19.579 Normal NVM Subsystem Shutdown event: Not Supported 00:07:19.579 Zone Descriptor Change Notices: Not Supported 00:07:19.579 Discovery Log Change Notices: Not Supported 00:07:19.579 Controller Attributes 00:07:19.579 128-bit Host Identifier: Not Supported 00:07:19.579 Non-Operational Permissive Mode: Not Supported 00:07:19.579 NVM Sets: Not Supported 00:07:19.579 Read Recovery Levels: Not Supported 00:07:19.579 Endurance Groups: Supported 00:07:19.579 Predictable Latency Mode: Not Supported 00:07:19.579 Traffic Based Keep ALive: Not Supported 00:07:19.579 Namespace Granularity: Not Supported 00:07:19.579 SQ Associations: Not Supported 00:07:19.579 UUID List: Not Supported 00:07:19.579 Multi-Domain Subsystem: Not Supported 00:07:19.579 Fixed Capacity Management: Not Supported 00:07:19.579 Variable Capacity Management: Not Supported 00:07:19.579 Delete Endurance Group: Not Supported 00:07:19.579 Delete NVM Set: Not Supported 00:07:19.579 Extended LBA Formats Supported: Supported 00:07:19.579 Flexible Data Placement Supported: Supported 00:07:19.579 00:07:19.579 Controller Memory Buffer Support 00:07:19.579 ================================ 00:07:19.579 Supported: No 00:07:19.579 00:07:19.579 Persistent Memory Region Support 00:07:19.579 ================================ 00:07:19.579 Supported: No 00:07:19.579 00:07:19.579 Admin Command Set Attributes 00:07:19.579 ============================ 00:07:19.579 Security Send/Receive: Not 
Supported 00:07:19.579 Format NVM: Supported 00:07:19.579 Firmware Activate/Download: Not Supported 00:07:19.579 Namespace Management: Supported 00:07:19.579 Device Self-Test: Not Supported 00:07:19.579 Directives: Supported 00:07:19.579 NVMe-MI: Not Supported 00:07:19.579 Virtualization Management: Not Supported 00:07:19.579 Doorbell Buffer Config: Supported 00:07:19.579 Get LBA Status Capability: Not Supported 00:07:19.579 Command & Feature Lockdown Capability: Not Supported 00:07:19.579 Abort Command Limit: 4 00:07:19.579 Async Event Request Limit: 4 00:07:19.579 Number of Firmware Slots: N/A 00:07:19.579 Firmware Slot 1 Read-Only: N/A 00:07:19.579 Firmware Activation Without Reset: N/A 00:07:19.579 Multiple Update Detection Support: N/A 00:07:19.579 Firmware Update Granularity: No Information Provided 00:07:19.579 Per-Namespace SMART Log: Yes 00:07:19.579 Asymmetric Namespace Access Log Page: Not Supported 00:07:19.579 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:19.579 Command Effects Log Page: Supported 00:07:19.579 Get Log Page Extended Data: Supported 00:07:19.579 Telemetry Log Pages: Not Supported 00:07:19.579 Persistent Event Log Pages: Not Supported 00:07:19.579 Supported Log Pages Log Page: May Support 00:07:19.579 Commands Supported & Effects Log Page: Not Supported 00:07:19.579 Feature Identifiers & Effects Log Page:May Support 00:07:19.579 NVMe-MI Commands & Effects Log Page: May Support 00:07:19.579 Data Area 4 for Telemetry Log: Not Supported 00:07:19.579 Error Log Page Entries Supported: 1 00:07:19.579 Keep Alive: Not Supported 00:07:19.579 00:07:19.579 NVM Command Set Attributes 00:07:19.579 ========================== 00:07:19.579 Submission Queue Entry Size 00:07:19.579 Max: 64 00:07:19.579 Min: 64 00:07:19.579 Completion Queue Entry Size 00:07:19.579 Max: 16 00:07:19.579 Min: 16 00:07:19.579 Number of Namespaces: 256 00:07:19.579 Compare Command: Supported 00:07:19.579 Write Uncorrectable Command: Not Supported 00:07:19.579 Dataset Management Command: Supported 00:07:19.579 Write Zeroes Command: Supported 00:07:19.579 Set Features Save Field: Supported 00:07:19.579 Reservations: Not Supported 00:07:19.579 Timestamp: Supported 00:07:19.579 Copy: Supported 00:07:19.579 Volatile Write Cache: Present 00:07:19.579 Atomic Write Unit (Normal): 1 00:07:19.579 Atomic Write Unit (PFail): 1 00:07:19.579 Atomic Compare & Write Unit: 1 00:07:19.579 Fused Compare & Write: Not Supported 00:07:19.579 Scatter-Gather List 00:07:19.579 SGL Command Set: Supported 00:07:19.579 SGL Keyed: Not Supported 00:07:19.579 SGL Bit Bucket Descriptor: Not Supported 00:07:19.579 SGL Metadata Pointer: Not Supported 00:07:19.579 Oversized SGL: Not Supported 00:07:19.579 SGL Metadata Address: Not Supported 00:07:19.579 SGL Offset: Not Supported 00:07:19.579 Transport SGL Data Block: Not Supported 00:07:19.579 Replay Protected Memory Block: Not Supported 00:07:19.579 00:07:19.579 Firmware Slot Information 00:07:19.579 ========================= 00:07:19.579 Active slot: 1 00:07:19.579 Slot 1 Firmware Revision: 1.0 00:07:19.579 00:07:19.579 00:07:19.579 Commands Supported and Effects 00:07:19.579 ============================== 00:07:19.579 Admin Commands 00:07:19.579 -------------- 00:07:19.579 Delete I/O Submission Queue (00h): Supported 00:07:19.579 Create I/O Submission Queue (01h): Supported 00:07:19.579 Get Log Page (02h): Supported 00:07:19.579 Delete I/O Completion Queue (04h): Supported 00:07:19.579 Create I/O Completion Queue (05h): Supported 00:07:19.579 Identify (06h): Supported 
00:07:19.579 Abort (08h): Supported 00:07:19.579 Set Features (09h): Supported 00:07:19.579 Get Features (0Ah): Supported 00:07:19.579 Asynchronous Event Request (0Ch): Supported 00:07:19.579 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:19.579 Directive Send (19h): Supported 00:07:19.579 Directive Receive (1Ah): Supported 00:07:19.579 Virtualization Management (1Ch): Supported 00:07:19.579 Doorbell Buffer Config (7Ch): Supported 00:07:19.579 Format NVM (80h): Supported LBA-Change 00:07:19.579 I/O Commands 00:07:19.579 ------------ 00:07:19.579 Flush (00h): Supported LBA-Change 00:07:19.579 Write (01h): Supported LBA-Change 00:07:19.579 Read (02h): Supported 00:07:19.579 Compare (05h): Supported 00:07:19.579 Write Zeroes (08h): Supported LBA-Change 00:07:19.579 Dataset Management (09h): Supported LBA-Change 00:07:19.579 Unknown (0Ch): Supported 00:07:19.579 Unknown (12h): Supported 00:07:19.579 Copy (19h): Supported LBA-Change 00:07:19.579 Unknown (1Dh): Supported LBA-Change 00:07:19.579 00:07:19.579 Error Log 00:07:19.579 ========= 00:07:19.579 00:07:19.579 Arbitration 00:07:19.579 =========== 00:07:19.579 Arbitration Burst: no limit 00:07:19.579 00:07:19.579 Power Management 00:07:19.579 ================ 00:07:19.579 Number of Power States: 1 00:07:19.579 Current Power State: Power State #0 00:07:19.579 Power State #0: 00:07:19.579 Max Power: 25.00 W 00:07:19.579 Non-Operational State: Operational 00:07:19.579 Entry Latency: 16 microseconds 00:07:19.579 Exit Latency: 4 microseconds 00:07:19.579 Relative Read Throughput: 0 00:07:19.579 Relative Read Latency: 0 00:07:19.579 Relative Write Throughput: 0 00:07:19.579 Relative Write Latency: 0 00:07:19.579 Idle Power: Not Reported 00:07:19.579 Active Power: Not Reported 00:07:19.579 Non-Operational Permissive Mode: Not Supported 00:07:19.579 00:07:19.579 Health Information 00:07:19.579 ================== 00:07:19.579 Critical Warnings: 00:07:19.580 Available Spare Space: OK 00:07:19.580 Temperature: OK 00:07:19.580 Device Reliability: OK 00:07:19.580 Read Only: No 00:07:19.580 Volatile Memory Backup: OK 00:07:19.580 Current Temperature: 323 Kelvin (50 Celsius) 00:07:19.580 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:19.580 Available Spare: 0% 00:07:19.580 Available Spare Threshold: 0% 00:07:19.580 Life Percentage Used: 0% 00:07:19.580 Data Units Read: 959 00:07:19.580 Data Units Written: 888 00:07:19.580 Host Read Commands: 42906 00:07:19.580 Host Write Commands: 42329 00:07:19.580 Controller Busy Time: 0 minutes 00:07:19.580 Power Cycles: 0 00:07:19.580 Power On Hours: 0 hours 00:07:19.580 Unsafe Shutdowns: 0 00:07:19.580 Unrecoverable Media Errors: 0 00:07:19.580 Lifetime Error Log Entries: 0 00:07:19.580 Warning Temperature Time: 0 minutes 00:07:19.580 Critical Temperature Time: 0 minutes 00:07:19.580 00:07:19.580 Number of Queues 00:07:19.580 ================ 00:07:19.580 Number of I/O Submission Queues: 64 00:07:19.580 Number of I/O Completion Queues: 64 00:07:19.580 00:07:19.580 ZNS Specific Controller Data 00:07:19.580 ============================ 00:07:19.580 Zone Append Size Limit: 0 00:07:19.580 00:07:19.580 00:07:19.580 Active Namespaces 00:07:19.580 ================= 00:07:19.580 Namespace ID:1 00:07:19.580 Error Recovery Timeout: Unlimited 00:07:19.580 Command Set Identifier: NVM (00h) 00:07:19.580 Deallocate: Supported 00:07:19.580 Deallocated/Unwritten Error: Supported 00:07:19.580 Deallocated Read Value: All 0x00 00:07:19.580 Deallocate in Write Zeroes: Not Supported 00:07:19.580 Deallocated Guard 
Field: 0xFFFF 00:07:19.580 Flush: Supported 00:07:19.580 Reservation: Not Supported 00:07:19.580 Namespace Sharing Capabilities: Multiple Controllers 00:07:19.580 Size (in LBAs): 262144 (1GiB) 00:07:19.580 Capacity (in LBAs): 262144 (1GiB) 00:07:19.580 Utilization (in LBAs): 262144 (1GiB) 00:07:19.580 Thin Provisioning: Not Supported 00:07:19.580 Per-NS Atomic Units: No 00:07:19.580 Maximum Single Source Range Length: 128 00:07:19.580 Maximum Copy Length: 128 00:07:19.580 Maximum Source Range Count: 128 00:07:19.580 NGUID/EUI64 Never Reused: No 00:07:19.580 Namespace Write Protected: No 00:07:19.580 Endurance group ID: 1 00:07:19.580 Number of LBA Formats: 8 00:07:19.580 Current LBA Format: LBA Format #04 00:07:19.580 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:19.580 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:19.580 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:19.580 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:19.580 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:19.580 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:19.580 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:19.580 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:19.580 00:07:19.580 Get Feature FDP: 00:07:19.580 ================ 00:07:19.580 Enabled: Yes 00:07:19.580 FDP configuration index: 0 00:07:19.580 00:07:19.580 FDP configurations log page 00:07:19.580 =========================== 00:07:19.580 Number of FDP configurations: 1 00:07:19.580 Version: 0 00:07:19.580 Size: 112 00:07:19.580 FDP Configuration Descriptor: 0 00:07:19.580 Descriptor Size: 96 00:07:19.580 Reclaim Group Identifier format: 2 00:07:19.580 FDP Volatile Write Cache: Not Present 00:07:19.580 FDP Configuration: Valid 00:07:19.580 Vendor Specific Size: 0 00:07:19.580 Number of Reclaim Groups: 2 00:07:19.580 Number of Reclaim Unit Handles: 8 00:07:19.580 Max Placement Identifiers: 128 00:07:19.580 Number of Namespaces Supported: 256 00:07:19.580 Reclaim Unit Nominal Size: 6000000 bytes 00:07:19.580 Estimated Reclaim Unit Time Limit: Not Reported 00:07:19.580 RUH Desc #000: RUH Type: Initially Isolated 00:07:19.580 RUH Desc #001: RUH Type: Initially Isolated 00:07:19.580 RUH Desc #002: RUH Type: Initially Isolated 00:07:19.580 RUH Desc #003: RUH Type: Initially Isolated 00:07:19.580 RUH Desc #004: RUH Type: Initially Isolated 00:07:19.580 RUH Desc #005: RUH Type: Initially Isolated 00:07:19.580 RUH Desc #006: RUH Type: Initially Isolated 00:07:19.580 RUH Desc #007: RUH Type: Initially Isolated 00:07:19.580 00:07:19.580 FDP reclaim unit handle usage log page 00:07:19.580 [2024-12-12 20:18:03.618044] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64688 terminated unexpected 00:07:19.580 ====================================== 00:07:19.580 Number of Reclaim Unit Handles: 8 00:07:19.580 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:19.580 RUH Usage Desc #001: RUH Attributes: Unused 00:07:19.580 RUH Usage Desc #002: RUH Attributes: Unused 00:07:19.580 RUH Usage Desc #003: RUH Attributes: Unused 00:07:19.580 RUH Usage Desc #004: RUH Attributes: Unused 00:07:19.580 RUH Usage Desc #005: RUH Attributes: Unused 00:07:19.580 RUH Usage Desc #006: RUH Attributes: Unused 00:07:19.580 RUH Usage Desc #007: RUH Attributes: Unused 00:07:19.580 00:07:19.580 FDP statistics log page 00:07:19.580 ======================= 00:07:19.580 Host bytes with metadata written: 546742272 00:07:19.580 Media bytes with metadata written: 546799616 00:07:19.580 Media
bytes erased: 0 00:07:19.580 00:07:19.580 FDP events log page 00:07:19.580 =================== 00:07:19.580 Number of FDP events: 0 00:07:19.580 00:07:19.580 NVM Specific Namespace Data 00:07:19.580 =========================== 00:07:19.580 Logical Block Storage Tag Mask: 0 00:07:19.580 Protection Information Capabilities: 00:07:19.580 16b Guard Protection Information Storage Tag Support: No 00:07:19.580 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:19.580 Storage Tag Check Read Support: No 00:07:19.580 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.580 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.580 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.580 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.580 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.580 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.580 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.580 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.580 ===================================================== 00:07:19.580 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:19.580 ===================================================== 00:07:19.580 Controller Capabilities/Features 00:07:19.580 ================================ 00:07:19.580 Vendor ID: 1b36 00:07:19.580 Subsystem Vendor ID: 1af4 00:07:19.580 Serial Number: 12340 00:07:19.580 Model Number: QEMU NVMe Ctrl 00:07:19.580 Firmware Version: 8.0.0 00:07:19.580 Recommended Arb Burst: 6 00:07:19.580 IEEE OUI Identifier: 00 54 52 00:07:19.580 Multi-path I/O 00:07:19.580 May have multiple subsystem ports: No 00:07:19.580 May have multiple controllers: No 00:07:19.580 Associated with SR-IOV VF: No 00:07:19.580 Max Data Transfer Size: 524288 00:07:19.580 Max Number of Namespaces: 256 00:07:19.580 Max Number of I/O Queues: 64 00:07:19.580 NVMe Specification Version (VS): 1.4 00:07:19.580 NVMe Specification Version (Identify): 1.4 00:07:19.580 Maximum Queue Entries: 2048 00:07:19.580 Contiguous Queues Required: Yes 00:07:19.580 Arbitration Mechanisms Supported 00:07:19.580 Weighted Round Robin: Not Supported 00:07:19.580 Vendor Specific: Not Supported 00:07:19.580 Reset Timeout: 7500 ms 00:07:19.580 Doorbell Stride: 4 bytes 00:07:19.580 NVM Subsystem Reset: Not Supported 00:07:19.580 Command Sets Supported 00:07:19.580 NVM Command Set: Supported 00:07:19.580 Boot Partition: Not Supported 00:07:19.580 Memory Page Size Minimum: 4096 bytes 00:07:19.580 Memory Page Size Maximum: 65536 bytes 00:07:19.580 Persistent Memory Region: Not Supported 00:07:19.580 Optional Asynchronous Events Supported 00:07:19.580 Namespace Attribute Notices: Supported 00:07:19.580 Firmware Activation Notices: Not Supported 00:07:19.580 ANA Change Notices: Not Supported 00:07:19.580 PLE Aggregate Log Change Notices: Not Supported 00:07:19.580 LBA Status Info Alert Notices: Not Supported 00:07:19.580 EGE Aggregate Log Change Notices: Not Supported 00:07:19.580 Normal NVM Subsystem Shutdown event: Not Supported 00:07:19.580 Zone Descriptor Change Notices: Not Supported 00:07:19.580 Discovery Log Change Notices: Not Supported 00:07:19.580 Controller Attributes 00:07:19.580 
128-bit Host Identifier: Not Supported 00:07:19.580 Non-Operational Permissive Mode: Not Supported 00:07:19.580 NVM Sets: Not Supported 00:07:19.580 Read Recovery Levels: Not Supported 00:07:19.580 Endurance Groups: Not Supported 00:07:19.580 Predictable Latency Mode: Not Supported 00:07:19.580 Traffic Based Keep ALive: Not Supported 00:07:19.580 Namespace Granularity: Not Supported 00:07:19.580 SQ Associations: Not Supported 00:07:19.580 UUID List: Not Supported 00:07:19.580 Multi-Domain Subsystem: Not Supported 00:07:19.580 Fixed Capacity Management: Not Supported 00:07:19.580 Variable Capacity Management: Not Supported 00:07:19.580 Delete Endurance Group: Not Supported 00:07:19.581 Delete NVM Set: Not Supported 00:07:19.581 Extended LBA Formats Supported: Supported 00:07:19.581 Flexible Data Placement Supported: Not Supported 00:07:19.581 00:07:19.581 Controller Memory Buffer Support 00:07:19.581 ================================ 00:07:19.581 Supported: No 00:07:19.581 00:07:19.581 Persistent Memory Region Support 00:07:19.581 ================================ 00:07:19.581 Supported: No 00:07:19.581 00:07:19.581 Admin Command Set Attributes 00:07:19.581 ============================ 00:07:19.581 Security Send/Receive: Not Supported 00:07:19.581 Format NVM: Supported 00:07:19.581 Firmware Activate/Download: Not Supported 00:07:19.581 Namespace Management: Supported 00:07:19.581 Device Self-Test: Not Supported 00:07:19.581 Directives: Supported 00:07:19.581 NVMe-MI: Not Supported 00:07:19.581 Virtualization Management: Not Supported 00:07:19.581 Doorbell Buffer Config: Supported 00:07:19.581 Get LBA Status Capability: Not Supported 00:07:19.581 Command & Feature Lockdown Capability: Not Supported 00:07:19.581 Abort Command Limit: 4 00:07:19.581 Async Event Request Limit: 4 00:07:19.581 Number of Firmware Slots: N/A 00:07:19.581 Firmware Slot 1 Read-Only: N/A 00:07:19.581 Firmware Activation Without Reset: N/A 00:07:19.581 Multiple Update Detection Support: N/A 00:07:19.581 Firmware Update Granularity: No Information Provided 00:07:19.581 Per-Namespace SMART Log: Yes 00:07:19.581 Asymmetric Namespace Access Log Page: Not Supported 00:07:19.581 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:19.581 Command Effects Log Page: Supported 00:07:19.581 Get Log Page Extended Data: Supported 00:07:19.581 Telemetry Log Pages: Not Supported 00:07:19.581 Persistent Event Log Pages: Not Supported 00:07:19.581 Supported Log Pages Log Page: May Support 00:07:19.581 Commands Supported & Effects Log Page: Not Supported 00:07:19.581 Feature Identifiers & Effects Log Page:May Support 00:07:19.581 NVMe-MI Commands & Effects Log Page: May Support 00:07:19.581 Data Area 4 for Telemetry Log: Not Supported 00:07:19.581 Error Log Page Entries Supported: 1 00:07:19.581 Keep Alive: Not Supported 00:07:19.581 00:07:19.581 NVM Command Set Attributes 00:07:19.581 ========================== 00:07:19.581 Submission Queue Entry Size 00:07:19.581 Max: 64 00:07:19.581 Min: 64 00:07:19.581 Completion Queue Entry Size 00:07:19.581 Max: 16 00:07:19.581 Min: 16 00:07:19.581 Number of Namespaces: 256 00:07:19.581 Compare Command: Supported 00:07:19.581 Write Uncorrectable Command: Not Supported 00:07:19.581 Dataset Management Command: Supported 00:07:19.581 Write Zeroes Command: Supported 00:07:19.581 Set Features Save Field: Supported 00:07:19.581 Reservations: Not Supported 00:07:19.581 Timestamp: Supported 00:07:19.581 Copy: Supported 00:07:19.581 Volatile Write Cache: Present 00:07:19.581 Atomic Write Unit (Normal): 1 
00:07:19.581 Atomic Write Unit (PFail): 1 00:07:19.581 Atomic Compare & Write Unit: 1 00:07:19.581 Fused Compare & Write: Not Supported 00:07:19.581 Scatter-Gather List 00:07:19.581 SGL Command Set: Supported 00:07:19.581 SGL Keyed: Not Supported 00:07:19.581 SGL Bit Bucket Descriptor: Not Supported 00:07:19.581 SGL Metadata Pointer: Not Supported 00:07:19.581 Oversized SGL: Not Supported 00:07:19.581 SGL Metadata Address: Not Supported 00:07:19.581 SGL Offset: Not Supported 00:07:19.581 Transport SGL Data Block: Not Supported 00:07:19.581 Replay Protected Memory Block: Not Supported 00:07:19.581 00:07:19.581 Firmware Slot Information 00:07:19.581 ========================= 00:07:19.581 Active slot: 1 00:07:19.581 Slot 1 Firmware Revision: 1.0 00:07:19.581 00:07:19.581 00:07:19.581 Commands Supported and Effects 00:07:19.581 ============================== 00:07:19.581 Admin Commands 00:07:19.581 -------------- 00:07:19.581 Delete I/O Submission Queue (00h): Supported 00:07:19.581 Create I/O Submission Queue (01h): Supported 00:07:19.581 Get Log Page (02h): Supported 00:07:19.581 Delete I/O Completion Queue (04h): Supported 00:07:19.581 Create I/O Completion Queue (05h): Supported 00:07:19.581 Identify (06h): Supported 00:07:19.581 Abort (08h): Supported 00:07:19.581 Set Features (09h): Supported 00:07:19.581 Get Features (0Ah): Supported 00:07:19.581 Asynchronous Event Request (0Ch): Supported 00:07:19.581 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:19.581 Directive Send (19h): Supported 00:07:19.581 Directive Receive (1Ah): Supported 00:07:19.581 Virtualization Management (1Ch): Supported 00:07:19.581 Doorbell Buffer Config (7Ch): Supported 00:07:19.581 Format NVM (80h): Supported LBA-Change 00:07:19.581 I/O Commands 00:07:19.581 ------------ 00:07:19.581 Flush (00h): Supported LBA-Change 00:07:19.581 Write (01h): Supported LBA-Change 00:07:19.581 Read (02h): Supported 00:07:19.581 Compare (05h): Supported 00:07:19.581 Write Zeroes (08h): Supported LBA-Change 00:07:19.581 Dataset Management (09h): Supported LBA-Change 00:07:19.581 Unknown (0Ch): Supported 00:07:19.581 Unknown (12h): Supported 00:07:19.581 Copy (19h): Supported LBA-Change 00:07:19.581 Unknown (1Dh): Supported LBA-Change 00:07:19.581 00:07:19.581 Error Log 00:07:19.581 ========= 00:07:19.581 00:07:19.581 Arbitration 00:07:19.581 =========== 00:07:19.581 Arbitration Burst: no limit 00:07:19.581 00:07:19.581 Power Management 00:07:19.581 ================ 00:07:19.581 Number of Power States: 1 00:07:19.581 Current Power State: Power State #0 00:07:19.581 Power State #0: 00:07:19.581 Max Power: 25.00 W 00:07:19.581 Non-Operational State: Operational 00:07:19.581 Entry Latency: 16 microseconds 00:07:19.581 Exit Latency: 4 microseconds 00:07:19.581 Relative Read Throughput: 0 00:07:19.581 Relative Read Latency: 0 00:07:19.581 Relative Write Throughput: 0 00:07:19.581 Relative Write Latency: 0 00:07:19.581 Idle Power: Not Reported 00:07:19.581 Active Power: Not Reported 00:07:19.581 Non-Operational Permissive Mode: Not Supported 00:07:19.581 00:07:19.581 Health Information 00:07:19.581 ================== 00:07:19.581 Critical Warnings: 00:07:19.581 Available Spare Space: OK 00:07:19.581 Temperature: OK 00:07:19.581 Device Reliability: OK 00:07:19.581 Read Only: No 00:07:19.581 Volatile Memory Backup: OK 00:07:19.581 Current Temperature: 323 Kelvin (50 Celsius) 00:07:19.581 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:19.581 Available Spare: 0% 00:07:19.581 Available Spare Threshold: 0% 00:07:19.581 Life 
Percentage Used: 0% 00:07:19.581 Data Units Read: 667 00:07:19.581 Data Units Written: 595 00:07:19.581 Host Read Commands: 40147 00:07:19.581 Host Write Commands: 39933 00:07:19.581 Controller Busy Time: 0 minutes 00:07:19.581 Power Cycles: 0 00:07:19.581 Power On Hours: 0 hours 00:07:19.581 Unsafe Shutdowns: 0 00:07:19.581 Unrecoverable Media Errors: 0 00:07:19.581 Lifetime Error Log Entries: 0 00:07:19.581 Warning Temperature Time: 0 minutes 00:07:19.581 Critical Temperature Time: 0 minutes 00:07:19.581 00:07:19.581 Number of Queues 00:07:19.581 ================ 00:07:19.581 Number of I/O Submission Queues: 64 00:07:19.581 Number of I/O Completion Queues: 64 00:07:19.581 00:07:19.581 ZNS Specific Controller Data 00:07:19.581 ============================ 00:07:19.581 Zone Append Size Limit: 0 00:07:19.581 00:07:19.581 00:07:19.581 Active Namespaces 00:07:19.581 ================= 00:07:19.581 Namespace ID:1 00:07:19.581 Error Recovery Timeout: Unlimited 00:07:19.581 Command Set Identifier: NVM (00h) 00:07:19.581 Deallocate: Supported 00:07:19.581 Deallocated/Unwritten Error: Supported 00:07:19.581 Deallocated Read Value: All 0x00 00:07:19.581 Deallocate in Write Zeroes: Not Supported 00:07:19.581 Deallocated Guard Field: 0xFFFF 00:07:19.581 Flush: Supported 00:07:19.581 Reservation: Not Supported 00:07:19.581 Metadata Transferred as: Separate Metadata Buffer 00:07:19.581 Namespace Sharing Capabilities: Private 00:07:19.581 Size (in LBAs): 1548666 (5GiB) 00:07:19.581 Capacity (in LBAs): 1548666 (5GiB) 00:07:19.581 Utilization (in LBAs): 1548666 (5GiB) 00:07:19.581 Thin Provisioning: Not Supported 00:07:19.581 Per-NS Atomic Units: No 00:07:19.581 Maximum Single Source Range Length: 128 00:07:19.581 Maximum Copy Length: 128 00:07:19.581 Maximum Source Range Count: 128 00:07:19.581 NGUID/EUI64 Never Reused: No 00:07:19.581 Namespace Write Protected: No 00:07:19.581 Number of LBA Formats: 8 00:07:19.581 [2024-12-12 20:18:03.619240] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64688 terminated unexpected 00:07:19.581 Current LBA Format: LBA Format #07 00:07:19.581 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:19.581 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:19.581 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:19.581 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:19.581 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:19.581 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:19.581 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:19.581 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:19.581 00:07:19.581 NVM Specific Namespace Data 00:07:19.582 =========================== 00:07:19.582 Logical Block Storage Tag Mask: 0 00:07:19.582 Protection Information Capabilities: 00:07:19.582 16b Guard Protection Information Storage Tag Support: No 00:07:19.582 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:19.582 Storage Tag Check Read Support: No 00:07:19.582 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.582 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.582 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.582 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.582 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.582
Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.582 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.582 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.582 ===================================================== 00:07:19.582 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:19.582 ===================================================== 00:07:19.582 Controller Capabilities/Features 00:07:19.582 ================================ 00:07:19.582 Vendor ID: 1b36 00:07:19.582 Subsystem Vendor ID: 1af4 00:07:19.582 Serial Number: 12341 00:07:19.582 Model Number: QEMU NVMe Ctrl 00:07:19.582 Firmware Version: 8.0.0 00:07:19.582 Recommended Arb Burst: 6 00:07:19.582 IEEE OUI Identifier: 00 54 52 00:07:19.582 Multi-path I/O 00:07:19.582 May have multiple subsystem ports: No 00:07:19.582 May have multiple controllers: No 00:07:19.582 Associated with SR-IOV VF: No 00:07:19.582 Max Data Transfer Size: 524288 00:07:19.582 Max Number of Namespaces: 256 00:07:19.582 Max Number of I/O Queues: 64 00:07:19.582 NVMe Specification Version (VS): 1.4 00:07:19.582 NVMe Specification Version (Identify): 1.4 00:07:19.582 Maximum Queue Entries: 2048 00:07:19.582 Contiguous Queues Required: Yes 00:07:19.582 Arbitration Mechanisms Supported 00:07:19.582 Weighted Round Robin: Not Supported 00:07:19.582 Vendor Specific: Not Supported 00:07:19.582 Reset Timeout: 7500 ms 00:07:19.582 Doorbell Stride: 4 bytes 00:07:19.582 NVM Subsystem Reset: Not Supported 00:07:19.582 Command Sets Supported 00:07:19.582 NVM Command Set: Supported 00:07:19.582 Boot Partition: Not Supported 00:07:19.582 Memory Page Size Minimum: 4096 bytes 00:07:19.582 Memory Page Size Maximum: 65536 bytes 00:07:19.582 Persistent Memory Region: Not Supported 00:07:19.582 Optional Asynchronous Events Supported 00:07:19.582 Namespace Attribute Notices: Supported 00:07:19.582 Firmware Activation Notices: Not Supported 00:07:19.582 ANA Change Notices: Not Supported 00:07:19.582 PLE Aggregate Log Change Notices: Not Supported 00:07:19.582 LBA Status Info Alert Notices: Not Supported 00:07:19.582 EGE Aggregate Log Change Notices: Not Supported 00:07:19.582 Normal NVM Subsystem Shutdown event: Not Supported 00:07:19.582 Zone Descriptor Change Notices: Not Supported 00:07:19.582 Discovery Log Change Notices: Not Supported 00:07:19.582 Controller Attributes 00:07:19.582 128-bit Host Identifier: Not Supported 00:07:19.582 Non-Operational Permissive Mode: Not Supported 00:07:19.582 NVM Sets: Not Supported 00:07:19.582 Read Recovery Levels: Not Supported 00:07:19.582 Endurance Groups: Not Supported 00:07:19.582 Predictable Latency Mode: Not Supported 00:07:19.582 Traffic Based Keep ALive: Not Supported 00:07:19.582 Namespace Granularity: Not Supported 00:07:19.582 SQ Associations: Not Supported 00:07:19.582 UUID List: Not Supported 00:07:19.582 Multi-Domain Subsystem: Not Supported 00:07:19.582 Fixed Capacity Management: Not Supported 00:07:19.582 Variable Capacity Management: Not Supported 00:07:19.582 Delete Endurance Group: Not Supported 00:07:19.582 Delete NVM Set: Not Supported 00:07:19.582 Extended LBA Formats Supported: Supported 00:07:19.582 Flexible Data Placement Supported: Not Supported 00:07:19.582 00:07:19.582 Controller Memory Buffer Support 00:07:19.582 ================================ 00:07:19.582 Supported: No 00:07:19.582 00:07:19.582 Persistent Memory Region Support 00:07:19.582 
================================ 00:07:19.582 Supported: No 00:07:19.582 00:07:19.582 Admin Command Set Attributes 00:07:19.582 ============================ 00:07:19.582 Security Send/Receive: Not Supported 00:07:19.582 Format NVM: Supported 00:07:19.582 Firmware Activate/Download: Not Supported 00:07:19.582 Namespace Management: Supported 00:07:19.582 Device Self-Test: Not Supported 00:07:19.582 Directives: Supported 00:07:19.582 NVMe-MI: Not Supported 00:07:19.582 Virtualization Management: Not Supported 00:07:19.582 Doorbell Buffer Config: Supported 00:07:19.582 Get LBA Status Capability: Not Supported 00:07:19.582 Command & Feature Lockdown Capability: Not Supported 00:07:19.582 Abort Command Limit: 4 00:07:19.582 Async Event Request Limit: 4 00:07:19.582 Number of Firmware Slots: N/A 00:07:19.582 Firmware Slot 1 Read-Only: N/A 00:07:19.582 Firmware Activation Without Reset: N/A 00:07:19.582 Multiple Update Detection Support: N/A 00:07:19.582 Firmware Update Granularity: No Information Provided 00:07:19.582 Per-Namespace SMART Log: Yes 00:07:19.582 Asymmetric Namespace Access Log Page: Not Supported 00:07:19.582 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:19.582 Command Effects Log Page: Supported 00:07:19.582 Get Log Page Extended Data: Supported 00:07:19.582 Telemetry Log Pages: Not Supported 00:07:19.582 Persistent Event Log Pages: Not Supported 00:07:19.582 Supported Log Pages Log Page: May Support 00:07:19.582 Commands Supported & Effects Log Page: Not Supported 00:07:19.582 Feature Identifiers & Effects Log Page:May Support 00:07:19.582 NVMe-MI Commands & Effects Log Page: May Support 00:07:19.582 Data Area 4 for Telemetry Log: Not Supported 00:07:19.582 Error Log Page Entries Supported: 1 00:07:19.582 Keep Alive: Not Supported 00:07:19.582 00:07:19.582 NVM Command Set Attributes 00:07:19.582 ========================== 00:07:19.582 Submission Queue Entry Size 00:07:19.582 Max: 64 00:07:19.582 Min: 64 00:07:19.582 Completion Queue Entry Size 00:07:19.582 Max: 16 00:07:19.582 Min: 16 00:07:19.582 Number of Namespaces: 256 00:07:19.582 Compare Command: Supported 00:07:19.582 Write Uncorrectable Command: Not Supported 00:07:19.582 Dataset Management Command: Supported 00:07:19.582 Write Zeroes Command: Supported 00:07:19.582 Set Features Save Field: Supported 00:07:19.582 Reservations: Not Supported 00:07:19.582 Timestamp: Supported 00:07:19.582 Copy: Supported 00:07:19.582 Volatile Write Cache: Present 00:07:19.582 Atomic Write Unit (Normal): 1 00:07:19.582 Atomic Write Unit (PFail): 1 00:07:19.582 Atomic Compare & Write Unit: 1 00:07:19.582 Fused Compare & Write: Not Supported 00:07:19.582 Scatter-Gather List 00:07:19.582 SGL Command Set: Supported 00:07:19.582 SGL Keyed: Not Supported 00:07:19.582 SGL Bit Bucket Descriptor: Not Supported 00:07:19.582 SGL Metadata Pointer: Not Supported 00:07:19.582 Oversized SGL: Not Supported 00:07:19.582 SGL Metadata Address: Not Supported 00:07:19.582 SGL Offset: Not Supported 00:07:19.582 Transport SGL Data Block: Not Supported 00:07:19.582 Replay Protected Memory Block: Not Supported 00:07:19.582 00:07:19.582 Firmware Slot Information 00:07:19.582 ========================= 00:07:19.582 Active slot: 1 00:07:19.582 Slot 1 Firmware Revision: 1.0 00:07:19.582 00:07:19.582 00:07:19.582 Commands Supported and Effects 00:07:19.582 ============================== 00:07:19.582 Admin Commands 00:07:19.582 -------------- 00:07:19.582 Delete I/O Submission Queue (00h): Supported 00:07:19.582 Create I/O Submission Queue (01h): Supported 00:07:19.582 
Get Log Page (02h): Supported 00:07:19.582 Delete I/O Completion Queue (04h): Supported 00:07:19.582 Create I/O Completion Queue (05h): Supported 00:07:19.582 Identify (06h): Supported 00:07:19.582 Abort (08h): Supported 00:07:19.582 Set Features (09h): Supported 00:07:19.582 Get Features (0Ah): Supported 00:07:19.582 Asynchronous Event Request (0Ch): Supported 00:07:19.582 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:19.582 Directive Send (19h): Supported 00:07:19.582 Directive Receive (1Ah): Supported 00:07:19.582 Virtualization Management (1Ch): Supported 00:07:19.582 Doorbell Buffer Config (7Ch): Supported 00:07:19.582 Format NVM (80h): Supported LBA-Change 00:07:19.582 I/O Commands 00:07:19.582 ------------ 00:07:19.582 Flush (00h): Supported LBA-Change 00:07:19.582 Write (01h): Supported LBA-Change 00:07:19.582 Read (02h): Supported 00:07:19.582 Compare (05h): Supported 00:07:19.582 Write Zeroes (08h): Supported LBA-Change 00:07:19.582 Dataset Management (09h): Supported LBA-Change 00:07:19.582 Unknown (0Ch): Supported 00:07:19.582 Unknown (12h): Supported 00:07:19.582 Copy (19h): Supported LBA-Change 00:07:19.582 Unknown (1Dh): Supported LBA-Change 00:07:19.582 00:07:19.582 Error Log 00:07:19.582 ========= 00:07:19.582 00:07:19.582 Arbitration 00:07:19.582 =========== 00:07:19.582 Arbitration Burst: no limit 00:07:19.583 00:07:19.583 Power Management 00:07:19.583 ================ 00:07:19.583 Number of Power States: 1 00:07:19.583 Current Power State: Power State #0 00:07:19.583 Power State #0: 00:07:19.583 Max Power: 25.00 W 00:07:19.583 Non-Operational State: Operational 00:07:19.583 Entry Latency: 16 microseconds 00:07:19.583 Exit Latency: 4 microseconds 00:07:19.583 Relative Read Throughput: 0 00:07:19.583 Relative Read Latency: 0 00:07:19.583 Relative Write Throughput: 0 00:07:19.583 Relative Write Latency: 0 00:07:19.583 Idle Power: Not Reported 00:07:19.583 Active Power: Not Reported 00:07:19.583 Non-Operational Permissive Mode: Not Supported 00:07:19.583 00:07:19.583 Health Information 00:07:19.583 ================== 00:07:19.583 Critical Warnings: 00:07:19.583 Available Spare Space: OK 00:07:19.583 Temperature: OK 00:07:19.583 Device Reliability: OK 00:07:19.583 Read Only: No 00:07:19.583 Volatile Memory Backup: OK 00:07:19.583 Current Temperature: 323 Kelvin (50 Celsius) 00:07:19.583 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:19.583 Available Spare: 0% 00:07:19.583 Available Spare Threshold: 0% 00:07:19.583 Life Percentage Used: 0% 00:07:19.583 Data Units Read: 1018 00:07:19.583 Data Units Written: 891 00:07:19.583 Host Read Commands: 58659 00:07:19.583 Host Write Commands: 57553 00:07:19.583 Controller Busy Time: 0 minutes 00:07:19.583 Power Cycles: 0 00:07:19.583 Power On Hours: 0 hours 00:07:19.583 Unsafe Shutdowns: 0 00:07:19.583 Unrecoverable Media Errors: 0 00:07:19.583 Lifetime Error Log Entries: 0 00:07:19.583 Warning Temperature Time: 0 minutes 00:07:19.583 Critical Temperature Time: 0 minutes 00:07:19.583 00:07:19.583 Number of Queues 00:07:19.583 ================ 00:07:19.583 Number of I/O Submission Queues: 64 00:07:19.583 Number of I/O Completion Queues: 64 00:07:19.583 00:07:19.583 ZNS Specific Controller Data 00:07:19.583 ============================ 00:07:19.583 Zone Append Size Limit: 0 00:07:19.583 00:07:19.583 00:07:19.583 Active Namespaces 00:07:19.583 ================= 00:07:19.583 Namespace ID:1 00:07:19.583 Error Recovery Timeout: Unlimited 00:07:19.583 Command Set Identifier: NVM (00h) 00:07:19.583 Deallocate: Supported 
00:07:19.583 Deallocated/Unwritten Error: Supported 00:07:19.583 Deallocated Read Value: All 0x00 00:07:19.583 Deallocate in Write Zeroes: Not Supported 00:07:19.583 Deallocated Guard Field: 0xFFFF 00:07:19.583 Flush: Supported 00:07:19.583 Reservation: Not Supported 00:07:19.583 Namespace Sharing Capabilities: Private 00:07:19.583 Size (in LBAs): 1310720 (5GiB) 00:07:19.583 Capacity (in LBAs): 1310720 (5GiB) 00:07:19.583 Utilization (in LBAs): 1310720 (5GiB) 00:07:19.583 Thin Provisioning: Not Supported 00:07:19.583 Per-NS Atomic Units: No 00:07:19.583 Maximum Single Source Range Length: 128 00:07:19.583 Maximum Copy Length: 128 00:07:19.583 Maximum Source Range Count: 128 00:07:19.583 NGUID/EUI64 Never Reused: No 00:07:19.583 Namespace Write Protected: No 00:07:19.583 Number of LBA Formats: 8 00:07:19.583 Current LBA Format: LBA Format #04 00:07:19.583 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:19.583 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:19.583 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:19.583 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:19.583 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:19.583 [2024-12-12 20:18:03.620869] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64688 terminated unexpected 00:07:19.583 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:19.583 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:19.583 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:19.583 00:07:19.583 NVM Specific Namespace Data 00:07:19.583 =========================== 00:07:19.583 Logical Block Storage Tag Mask: 0 00:07:19.583 Protection Information Capabilities: 00:07:19.583 16b Guard Protection Information Storage Tag Support: No 00:07:19.583 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:19.583 Storage Tag Check Read Support: No 00:07:19.583 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.583 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.583 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.583 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.583 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.583 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.583 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.583 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.583 ===================================================== 00:07:19.583 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:19.583 ===================================================== 00:07:19.583 Controller Capabilities/Features 00:07:19.583 ================================ 00:07:19.583 Vendor ID: 1b36 00:07:19.583 Subsystem Vendor ID: 1af4 00:07:19.583 Serial Number: 12342 00:07:19.583 Model Number: QEMU NVMe Ctrl 00:07:19.583 Firmware Version: 8.0.0 00:07:19.583 Recommended Arb Burst: 6 00:07:19.583 IEEE OUI Identifier: 00 54 52 00:07:19.583 Multi-path I/O 00:07:19.583 May have multiple subsystem ports: No 00:07:19.583 May have multiple controllers: No 00:07:19.583 Associated with SR-IOV VF: No 00:07:19.583 Max Data Transfer Size: 524288 00:07:19.583 Max Number of Namespaces: 256 00:07:19.583 
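[Editor's note] The namespace geometry reported just above can be cross-checked by hand: the "(5GiB)" figure follows from the LBA count times the 4096-byte data size of the active LBA Format #04. A minimal shell sketch, illustrative only and not part of the test run:

  lbas=1310720    # Size (in LBAs) reported for Namespace ID:1
  block=4096      # Data Size of the current LBA Format #04
  bytes=$(( lbas * block ))                       # 5368709120 bytes
  echo "$(( bytes / (1024 * 1024 * 1024) )) GiB"  # prints: 5 GiB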
Max Number of I/O Queues: 64 00:07:19.583 NVMe Specification Version (VS): 1.4 00:07:19.583 NVMe Specification Version (Identify): 1.4 00:07:19.583 Maximum Queue Entries: 2048 00:07:19.583 Contiguous Queues Required: Yes 00:07:19.583 Arbitration Mechanisms Supported 00:07:19.583 Weighted Round Robin: Not Supported 00:07:19.583 Vendor Specific: Not Supported 00:07:19.583 Reset Timeout: 7500 ms 00:07:19.583 Doorbell Stride: 4 bytes 00:07:19.583 NVM Subsystem Reset: Not Supported 00:07:19.583 Command Sets Supported 00:07:19.583 NVM Command Set: Supported 00:07:19.583 Boot Partition: Not Supported 00:07:19.583 Memory Page Size Minimum: 4096 bytes 00:07:19.583 Memory Page Size Maximum: 65536 bytes 00:07:19.583 Persistent Memory Region: Not Supported 00:07:19.583 Optional Asynchronous Events Supported 00:07:19.583 Namespace Attribute Notices: Supported 00:07:19.583 Firmware Activation Notices: Not Supported 00:07:19.583 ANA Change Notices: Not Supported 00:07:19.583 PLE Aggregate Log Change Notices: Not Supported 00:07:19.583 LBA Status Info Alert Notices: Not Supported 00:07:19.583 EGE Aggregate Log Change Notices: Not Supported 00:07:19.583 Normal NVM Subsystem Shutdown event: Not Supported 00:07:19.583 Zone Descriptor Change Notices: Not Supported 00:07:19.583 Discovery Log Change Notices: Not Supported 00:07:19.583 Controller Attributes 00:07:19.583 128-bit Host Identifier: Not Supported 00:07:19.583 Non-Operational Permissive Mode: Not Supported 00:07:19.583 NVM Sets: Not Supported 00:07:19.583 Read Recovery Levels: Not Supported 00:07:19.583 Endurance Groups: Not Supported 00:07:19.583 Predictable Latency Mode: Not Supported 00:07:19.583 Traffic Based Keep ALive: Not Supported 00:07:19.583 Namespace Granularity: Not Supported 00:07:19.583 SQ Associations: Not Supported 00:07:19.583 UUID List: Not Supported 00:07:19.583 Multi-Domain Subsystem: Not Supported 00:07:19.583 Fixed Capacity Management: Not Supported 00:07:19.583 Variable Capacity Management: Not Supported 00:07:19.583 Delete Endurance Group: Not Supported 00:07:19.583 Delete NVM Set: Not Supported 00:07:19.583 Extended LBA Formats Supported: Supported 00:07:19.583 Flexible Data Placement Supported: Not Supported 00:07:19.583 00:07:19.584 Controller Memory Buffer Support 00:07:19.584 ================================ 00:07:19.584 Supported: No 00:07:19.584 00:07:19.584 Persistent Memory Region Support 00:07:19.584 ================================ 00:07:19.584 Supported: No 00:07:19.584 00:07:19.584 Admin Command Set Attributes 00:07:19.584 ============================ 00:07:19.584 Security Send/Receive: Not Supported 00:07:19.584 Format NVM: Supported 00:07:19.584 Firmware Activate/Download: Not Supported 00:07:19.584 Namespace Management: Supported 00:07:19.584 Device Self-Test: Not Supported 00:07:19.584 Directives: Supported 00:07:19.584 NVMe-MI: Not Supported 00:07:19.584 Virtualization Management: Not Supported 00:07:19.584 Doorbell Buffer Config: Supported 00:07:19.584 Get LBA Status Capability: Not Supported 00:07:19.584 Command & Feature Lockdown Capability: Not Supported 00:07:19.584 Abort Command Limit: 4 00:07:19.584 Async Event Request Limit: 4 00:07:19.584 Number of Firmware Slots: N/A 00:07:19.584 Firmware Slot 1 Read-Only: N/A 00:07:19.584 Firmware Activation Without Reset: N/A 00:07:19.584 Multiple Update Detection Support: N/A 00:07:19.584 Firmware Update Granularity: No Information Provided 00:07:19.584 Per-Namespace SMART Log: Yes 00:07:19.584 Asymmetric Namespace Access Log Page: Not Supported 00:07:19.584 
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:19.584 Command Effects Log Page: Supported 00:07:19.584 Get Log Page Extended Data: Supported 00:07:19.584 Telemetry Log Pages: Not Supported 00:07:19.584 Persistent Event Log Pages: Not Supported 00:07:19.584 Supported Log Pages Log Page: May Support 00:07:19.584 Commands Supported & Effects Log Page: Not Supported 00:07:19.584 Feature Identifiers & Effects Log Page:May Support 00:07:19.584 NVMe-MI Commands & Effects Log Page: May Support 00:07:19.584 Data Area 4 for Telemetry Log: Not Supported 00:07:19.584 Error Log Page Entries Supported: 1 00:07:19.584 Keep Alive: Not Supported 00:07:19.584 00:07:19.584 NVM Command Set Attributes 00:07:19.584 ========================== 00:07:19.584 Submission Queue Entry Size 00:07:19.584 Max: 64 00:07:19.584 Min: 64 00:07:19.584 Completion Queue Entry Size 00:07:19.584 Max: 16 00:07:19.584 Min: 16 00:07:19.584 Number of Namespaces: 256 00:07:19.584 Compare Command: Supported 00:07:19.584 Write Uncorrectable Command: Not Supported 00:07:19.584 Dataset Management Command: Supported 00:07:19.584 Write Zeroes Command: Supported 00:07:19.584 Set Features Save Field: Supported 00:07:19.584 Reservations: Not Supported 00:07:19.584 Timestamp: Supported 00:07:19.584 Copy: Supported 00:07:19.584 Volatile Write Cache: Present 00:07:19.584 Atomic Write Unit (Normal): 1 00:07:19.584 Atomic Write Unit (PFail): 1 00:07:19.584 Atomic Compare & Write Unit: 1 00:07:19.584 Fused Compare & Write: Not Supported 00:07:19.584 Scatter-Gather List 00:07:19.584 SGL Command Set: Supported 00:07:19.584 SGL Keyed: Not Supported 00:07:19.584 SGL Bit Bucket Descriptor: Not Supported 00:07:19.584 SGL Metadata Pointer: Not Supported 00:07:19.584 Oversized SGL: Not Supported 00:07:19.584 SGL Metadata Address: Not Supported 00:07:19.584 SGL Offset: Not Supported 00:07:19.584 Transport SGL Data Block: Not Supported 00:07:19.584 Replay Protected Memory Block: Not Supported 00:07:19.584 00:07:19.584 Firmware Slot Information 00:07:19.584 ========================= 00:07:19.584 Active slot: 1 00:07:19.584 Slot 1 Firmware Revision: 1.0 00:07:19.584 00:07:19.584 00:07:19.584 Commands Supported and Effects 00:07:19.584 ============================== 00:07:19.584 Admin Commands 00:07:19.584 -------------- 00:07:19.584 Delete I/O Submission Queue (00h): Supported 00:07:19.584 Create I/O Submission Queue (01h): Supported 00:07:19.584 Get Log Page (02h): Supported 00:07:19.584 Delete I/O Completion Queue (04h): Supported 00:07:19.584 Create I/O Completion Queue (05h): Supported 00:07:19.584 Identify (06h): Supported 00:07:19.584 Abort (08h): Supported 00:07:19.584 Set Features (09h): Supported 00:07:19.584 Get Features (0Ah): Supported 00:07:19.584 Asynchronous Event Request (0Ch): Supported 00:07:19.584 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:19.584 Directive Send (19h): Supported 00:07:19.584 Directive Receive (1Ah): Supported 00:07:19.584 Virtualization Management (1Ch): Supported 00:07:19.584 Doorbell Buffer Config (7Ch): Supported 00:07:19.584 Format NVM (80h): Supported LBA-Change 00:07:19.584 I/O Commands 00:07:19.584 ------------ 00:07:19.584 Flush (00h): Supported LBA-Change 00:07:19.584 Write (01h): Supported LBA-Change 00:07:19.584 Read (02h): Supported 00:07:19.584 Compare (05h): Supported 00:07:19.584 Write Zeroes (08h): Supported LBA-Change 00:07:19.584 Dataset Management (09h): Supported LBA-Change 00:07:19.584 Unknown (0Ch): Supported 00:07:19.584 Unknown (12h): Supported 00:07:19.584 Copy (19h): Supported 
LBA-Change 00:07:19.584 Unknown (1Dh): Supported LBA-Change 00:07:19.584 00:07:19.584 Error Log 00:07:19.584 ========= 00:07:19.584 00:07:19.584 Arbitration 00:07:19.584 =========== 00:07:19.584 Arbitration Burst: no limit 00:07:19.584 00:07:19.584 Power Management 00:07:19.584 ================ 00:07:19.584 Number of Power States: 1 00:07:19.584 Current Power State: Power State #0 00:07:19.584 Power State #0: 00:07:19.584 Max Power: 25.00 W 00:07:19.584 Non-Operational State: Operational 00:07:19.584 Entry Latency: 16 microseconds 00:07:19.584 Exit Latency: 4 microseconds 00:07:19.584 Relative Read Throughput: 0 00:07:19.584 Relative Read Latency: 0 00:07:19.584 Relative Write Throughput: 0 00:07:19.584 Relative Write Latency: 0 00:07:19.584 Idle Power: Not Reported 00:07:19.584 Active Power: Not Reported 00:07:19.584 Non-Operational Permissive Mode: Not Supported 00:07:19.584 00:07:19.584 Health Information 00:07:19.584 ================== 00:07:19.584 Critical Warnings: 00:07:19.584 Available Spare Space: OK 00:07:19.584 Temperature: OK 00:07:19.584 Device Reliability: OK 00:07:19.584 Read Only: No 00:07:19.584 Volatile Memory Backup: OK 00:07:19.584 Current Temperature: 323 Kelvin (50 Celsius) 00:07:19.584 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:19.584 Available Spare: 0% 00:07:19.584 Available Spare Threshold: 0% 00:07:19.584 Life Percentage Used: 0% 00:07:19.584 Data Units Read: 2230 00:07:19.584 Data Units Written: 2017 00:07:19.584 Host Read Commands: 123200 00:07:19.584 Host Write Commands: 121469 00:07:19.584 Controller Busy Time: 0 minutes 00:07:19.584 Power Cycles: 0 00:07:19.584 Power On Hours: 0 hours 00:07:19.584 Unsafe Shutdowns: 0 00:07:19.584 Unrecoverable Media Errors: 0 00:07:19.584 Lifetime Error Log Entries: 0 00:07:19.584 Warning Temperature Time: 0 minutes 00:07:19.584 Critical Temperature Time: 0 minutes 00:07:19.584 00:07:19.584 Number of Queues 00:07:19.584 ================ 00:07:19.584 Number of I/O Submission Queues: 64 00:07:19.584 Number of I/O Completion Queues: 64 00:07:19.584 00:07:19.584 ZNS Specific Controller Data 00:07:19.584 ============================ 00:07:19.584 Zone Append Size Limit: 0 00:07:19.584 00:07:19.584 00:07:19.584 Active Namespaces 00:07:19.584 ================= 00:07:19.584 Namespace ID:1 00:07:19.584 Error Recovery Timeout: Unlimited 00:07:19.584 Command Set Identifier: NVM (00h) 00:07:19.584 Deallocate: Supported 00:07:19.584 Deallocated/Unwritten Error: Supported 00:07:19.584 Deallocated Read Value: All 0x00 00:07:19.584 Deallocate in Write Zeroes: Not Supported 00:07:19.584 Deallocated Guard Field: 0xFFFF 00:07:19.584 Flush: Supported 00:07:19.584 Reservation: Not Supported 00:07:19.584 Namespace Sharing Capabilities: Private 00:07:19.584 Size (in LBAs): 1048576 (4GiB) 00:07:19.584 Capacity (in LBAs): 1048576 (4GiB) 00:07:19.584 Utilization (in LBAs): 1048576 (4GiB) 00:07:19.584 Thin Provisioning: Not Supported 00:07:19.584 Per-NS Atomic Units: No 00:07:19.584 Maximum Single Source Range Length: 128 00:07:19.584 Maximum Copy Length: 128 00:07:19.584 Maximum Source Range Count: 128 00:07:19.584 NGUID/EUI64 Never Reused: No 00:07:19.584 Namespace Write Protected: No 00:07:19.584 Number of LBA Formats: 8 00:07:19.584 Current LBA Format: LBA Format #04 00:07:19.584 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:19.584 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:19.584 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:19.584 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:19.584 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:07:19.584 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:19.584 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:19.584 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:19.584 00:07:19.584 NVM Specific Namespace Data 00:07:19.584 =========================== 00:07:19.584 Logical Block Storage Tag Mask: 0 00:07:19.584 Protection Information Capabilities: 00:07:19.584 16b Guard Protection Information Storage Tag Support: No 00:07:19.584 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:19.584 Storage Tag Check Read Support: No 00:07:19.584 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.584 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.584 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Namespace ID:2 00:07:19.585 Error Recovery Timeout: Unlimited 00:07:19.585 Command Set Identifier: NVM (00h) 00:07:19.585 Deallocate: Supported 00:07:19.585 Deallocated/Unwritten Error: Supported 00:07:19.585 Deallocated Read Value: All 0x00 00:07:19.585 Deallocate in Write Zeroes: Not Supported 00:07:19.585 Deallocated Guard Field: 0xFFFF 00:07:19.585 Flush: Supported 00:07:19.585 Reservation: Not Supported 00:07:19.585 Namespace Sharing Capabilities: Private 00:07:19.585 Size (in LBAs): 1048576 (4GiB) 00:07:19.585 Capacity (in LBAs): 1048576 (4GiB) 00:07:19.585 Utilization (in LBAs): 1048576 (4GiB) 00:07:19.585 Thin Provisioning: Not Supported 00:07:19.585 Per-NS Atomic Units: No 00:07:19.585 Maximum Single Source Range Length: 128 00:07:19.585 Maximum Copy Length: 128 00:07:19.585 Maximum Source Range Count: 128 00:07:19.585 NGUID/EUI64 Never Reused: No 00:07:19.585 Namespace Write Protected: No 00:07:19.585 Number of LBA Formats: 8 00:07:19.585 Current LBA Format: LBA Format #04 00:07:19.585 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:19.585 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:19.585 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:19.585 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:19.585 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:19.585 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:19.585 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:19.585 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:19.585 00:07:19.585 NVM Specific Namespace Data 00:07:19.585 =========================== 00:07:19.585 Logical Block Storage Tag Mask: 0 00:07:19.585 Protection Information Capabilities: 00:07:19.585 16b Guard Protection Information Storage Tag Support: No 00:07:19.585 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:19.585 Storage Tag Check Read Support: No 00:07:19.585 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:07:19.585 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Namespace ID:3 00:07:19.585 Error Recovery Timeout: Unlimited 00:07:19.585 Command Set Identifier: NVM (00h) 00:07:19.585 Deallocate: Supported 00:07:19.585 Deallocated/Unwritten Error: Supported 00:07:19.585 Deallocated Read Value: All 0x00 00:07:19.585 Deallocate in Write Zeroes: Not Supported 00:07:19.585 Deallocated Guard Field: 0xFFFF 00:07:19.585 Flush: Supported 00:07:19.585 Reservation: Not Supported 00:07:19.585 Namespace Sharing Capabilities: Private 00:07:19.585 Size (in LBAs): 1048576 (4GiB) 00:07:19.585 Capacity (in LBAs): 1048576 (4GiB) 00:07:19.585 Utilization (in LBAs): 1048576 (4GiB) 00:07:19.585 Thin Provisioning: Not Supported 00:07:19.585 Per-NS Atomic Units: No 00:07:19.585 Maximum Single Source Range Length: 128 00:07:19.585 Maximum Copy Length: 128 00:07:19.585 Maximum Source Range Count: 128 00:07:19.585 NGUID/EUI64 Never Reused: No 00:07:19.585 Namespace Write Protected: No 00:07:19.585 Number of LBA Formats: 8 00:07:19.585 Current LBA Format: LBA Format #04 00:07:19.585 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:19.585 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:19.585 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:19.585 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:19.585 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:19.585 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:19.585 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:19.585 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:19.585 00:07:19.585 NVM Specific Namespace Data 00:07:19.585 =========================== 00:07:19.585 Logical Block Storage Tag Mask: 0 00:07:19.585 Protection Information Capabilities: 00:07:19.585 16b Guard Protection Information Storage Tag Support: No 00:07:19.585 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:19.585 Storage Tag Check Read Support: No 00:07:19.585 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.585 20:18:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:19.585 20:18:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:19.843 ===================================================== 00:07:19.843 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:19.843 ===================================================== 00:07:19.843 Controller Capabilities/Features 00:07:19.843 ================================ 00:07:19.843 Vendor ID: 1b36 00:07:19.843 Subsystem Vendor ID: 1af4 00:07:19.843 Serial Number: 12340 00:07:19.843 Model Number: QEMU NVMe Ctrl 00:07:19.843 Firmware Version: 8.0.0 00:07:19.843 Recommended Arb Burst: 6 00:07:19.843 IEEE OUI Identifier: 00 54 52 00:07:19.843 Multi-path I/O 00:07:19.843 May have multiple subsystem ports: No 00:07:19.843 May have multiple controllers: No 00:07:19.843 Associated with SR-IOV VF: No 00:07:19.843 Max Data Transfer Size: 524288 00:07:19.843 Max Number of Namespaces: 256 00:07:19.843 Max Number of I/O Queues: 64 00:07:19.843 NVMe Specification Version (VS): 1.4 00:07:19.843 NVMe Specification Version (Identify): 1.4 00:07:19.843 Maximum Queue Entries: 2048 00:07:19.843 Contiguous Queues Required: Yes 00:07:19.843 Arbitration Mechanisms Supported 00:07:19.843 Weighted Round Robin: Not Supported 00:07:19.843 Vendor Specific: Not Supported 00:07:19.843 Reset Timeout: 7500 ms 00:07:19.843 Doorbell Stride: 4 bytes 00:07:19.843 NVM Subsystem Reset: Not Supported 00:07:19.843 Command Sets Supported 00:07:19.843 NVM Command Set: Supported 00:07:19.843 Boot Partition: Not Supported 00:07:19.843 Memory Page Size Minimum: 4096 bytes 00:07:19.844 Memory Page Size Maximum: 65536 bytes 00:07:19.844 Persistent Memory Region: Not Supported 00:07:19.844 Optional Asynchronous Events Supported 00:07:19.844 Namespace Attribute Notices: Supported 00:07:19.844 Firmware Activation Notices: Not Supported 00:07:19.844 ANA Change Notices: Not Supported 00:07:19.844 PLE Aggregate Log Change Notices: Not Supported 00:07:19.844 LBA Status Info Alert Notices: Not Supported 00:07:19.844 EGE Aggregate Log Change Notices: Not Supported 00:07:19.844 Normal NVM Subsystem Shutdown event: Not Supported 00:07:19.844 Zone Descriptor Change Notices: Not Supported 00:07:19.844 Discovery Log Change Notices: Not Supported 00:07:19.844 Controller Attributes 00:07:19.844 128-bit Host Identifier: Not Supported 00:07:19.844 Non-Operational Permissive Mode: Not Supported 00:07:19.844 NVM Sets: Not Supported 00:07:19.844 Read Recovery Levels: Not Supported 00:07:19.844 Endurance Groups: Not Supported 00:07:19.844 Predictable Latency Mode: Not Supported 00:07:19.844 Traffic Based Keep ALive: Not Supported 00:07:19.844 Namespace Granularity: Not Supported 00:07:19.844 SQ Associations: Not Supported 00:07:19.844 UUID List: Not Supported 00:07:19.844 Multi-Domain Subsystem: Not Supported 00:07:19.844 Fixed Capacity Management: Not Supported 00:07:19.844 Variable Capacity Management: Not Supported 00:07:19.844 Delete Endurance Group: Not Supported 00:07:19.844 Delete NVM Set: Not Supported 00:07:19.844 Extended LBA Formats Supported: Supported 00:07:19.844 Flexible Data Placement Supported: Not Supported 00:07:19.844 00:07:19.844 Controller Memory Buffer Support 00:07:19.844 ================================ 00:07:19.844 Supported: No 00:07:19.844 00:07:19.844 Persistent Memory Region Support 00:07:19.844 ================================ 00:07:19.844 Supported: No 00:07:19.844 00:07:19.844 Admin Command Set Attributes 00:07:19.844 ============================ 00:07:19.844 Security Send/Receive: Not Supported 00:07:19.844 
Format NVM: Supported 00:07:19.844 Firmware Activate/Download: Not Supported 00:07:19.844 Namespace Management: Supported 00:07:19.844 Device Self-Test: Not Supported 00:07:19.844 Directives: Supported 00:07:19.844 NVMe-MI: Not Supported 00:07:19.844 Virtualization Management: Not Supported 00:07:19.844 Doorbell Buffer Config: Supported 00:07:19.844 Get LBA Status Capability: Not Supported 00:07:19.844 Command & Feature Lockdown Capability: Not Supported 00:07:19.844 Abort Command Limit: 4 00:07:19.844 Async Event Request Limit: 4 00:07:19.844 Number of Firmware Slots: N/A 00:07:19.844 Firmware Slot 1 Read-Only: N/A 00:07:19.844 Firmware Activation Without Reset: N/A 00:07:19.844 Multiple Update Detection Support: N/A 00:07:19.844 Firmware Update Granularity: No Information Provided 00:07:19.844 Per-Namespace SMART Log: Yes 00:07:19.844 Asymmetric Namespace Access Log Page: Not Supported 00:07:19.844 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:19.844 Command Effects Log Page: Supported 00:07:19.844 Get Log Page Extended Data: Supported 00:07:19.844 Telemetry Log Pages: Not Supported 00:07:19.844 Persistent Event Log Pages: Not Supported 00:07:19.844 Supported Log Pages Log Page: May Support 00:07:19.844 Commands Supported & Effects Log Page: Not Supported 00:07:19.844 Feature Identifiers & Effects Log Page:May Support 00:07:19.844 NVMe-MI Commands & Effects Log Page: May Support 00:07:19.844 Data Area 4 for Telemetry Log: Not Supported 00:07:19.844 Error Log Page Entries Supported: 1 00:07:19.844 Keep Alive: Not Supported 00:07:19.844 00:07:19.844 NVM Command Set Attributes 00:07:19.844 ========================== 00:07:19.844 Submission Queue Entry Size 00:07:19.844 Max: 64 00:07:19.844 Min: 64 00:07:19.844 Completion Queue Entry Size 00:07:19.844 Max: 16 00:07:19.844 Min: 16 00:07:19.844 Number of Namespaces: 256 00:07:19.844 Compare Command: Supported 00:07:19.844 Write Uncorrectable Command: Not Supported 00:07:19.844 Dataset Management Command: Supported 00:07:19.844 Write Zeroes Command: Supported 00:07:19.844 Set Features Save Field: Supported 00:07:19.844 Reservations: Not Supported 00:07:19.844 Timestamp: Supported 00:07:19.844 Copy: Supported 00:07:19.844 Volatile Write Cache: Present 00:07:19.844 Atomic Write Unit (Normal): 1 00:07:19.844 Atomic Write Unit (PFail): 1 00:07:19.844 Atomic Compare & Write Unit: 1 00:07:19.844 Fused Compare & Write: Not Supported 00:07:19.844 Scatter-Gather List 00:07:19.844 SGL Command Set: Supported 00:07:19.844 SGL Keyed: Not Supported 00:07:19.844 SGL Bit Bucket Descriptor: Not Supported 00:07:19.844 SGL Metadata Pointer: Not Supported 00:07:19.844 Oversized SGL: Not Supported 00:07:19.844 SGL Metadata Address: Not Supported 00:07:19.844 SGL Offset: Not Supported 00:07:19.844 Transport SGL Data Block: Not Supported 00:07:19.844 Replay Protected Memory Block: Not Supported 00:07:19.844 00:07:19.844 Firmware Slot Information 00:07:19.844 ========================= 00:07:19.844 Active slot: 1 00:07:19.844 Slot 1 Firmware Revision: 1.0 00:07:19.844 00:07:19.844 00:07:19.844 Commands Supported and Effects 00:07:19.844 ============================== 00:07:19.844 Admin Commands 00:07:19.844 -------------- 00:07:19.844 Delete I/O Submission Queue (00h): Supported 00:07:19.844 Create I/O Submission Queue (01h): Supported 00:07:19.844 Get Log Page (02h): Supported 00:07:19.844 Delete I/O Completion Queue (04h): Supported 00:07:19.844 Create I/O Completion Queue (05h): Supported 00:07:19.844 Identify (06h): Supported 00:07:19.844 Abort (08h): Supported 
00:07:19.844 Set Features (09h): Supported 00:07:19.844 Get Features (0Ah): Supported 00:07:19.844 Asynchronous Event Request (0Ch): Supported 00:07:19.844 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:19.844 Directive Send (19h): Supported 00:07:19.844 Directive Receive (1Ah): Supported 00:07:19.844 Virtualization Management (1Ch): Supported 00:07:19.844 Doorbell Buffer Config (7Ch): Supported 00:07:19.844 Format NVM (80h): Supported LBA-Change 00:07:19.844 I/O Commands 00:07:19.844 ------------ 00:07:19.844 Flush (00h): Supported LBA-Change 00:07:19.844 Write (01h): Supported LBA-Change 00:07:19.844 Read (02h): Supported 00:07:19.844 Compare (05h): Supported 00:07:19.844 Write Zeroes (08h): Supported LBA-Change 00:07:19.844 Dataset Management (09h): Supported LBA-Change 00:07:19.844 Unknown (0Ch): Supported 00:07:19.844 Unknown (12h): Supported 00:07:19.844 Copy (19h): Supported LBA-Change 00:07:19.844 Unknown (1Dh): Supported LBA-Change 00:07:19.844 00:07:19.844 Error Log 00:07:19.844 ========= 00:07:19.844 00:07:19.844 Arbitration 00:07:19.844 =========== 00:07:19.844 Arbitration Burst: no limit 00:07:19.844 00:07:19.844 Power Management 00:07:19.844 ================ 00:07:19.844 Number of Power States: 1 00:07:19.844 Current Power State: Power State #0 00:07:19.844 Power State #0: 00:07:19.844 Max Power: 25.00 W 00:07:19.844 Non-Operational State: Operational 00:07:19.844 Entry Latency: 16 microseconds 00:07:19.844 Exit Latency: 4 microseconds 00:07:19.844 Relative Read Throughput: 0 00:07:19.844 Relative Read Latency: 0 00:07:19.844 Relative Write Throughput: 0 00:07:19.844 Relative Write Latency: 0 00:07:19.844 Idle Power: Not Reported 00:07:19.844 Active Power: Not Reported 00:07:19.844 Non-Operational Permissive Mode: Not Supported 00:07:19.844 00:07:19.844 Health Information 00:07:19.844 ================== 00:07:19.844 Critical Warnings: 00:07:19.844 Available Spare Space: OK 00:07:19.844 Temperature: OK 00:07:19.844 Device Reliability: OK 00:07:19.844 Read Only: No 00:07:19.844 Volatile Memory Backup: OK 00:07:19.844 Current Temperature: 323 Kelvin (50 Celsius) 00:07:19.844 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:19.844 Available Spare: 0% 00:07:19.844 Available Spare Threshold: 0% 00:07:19.844 Life Percentage Used: 0% 00:07:19.844 Data Units Read: 667 00:07:19.844 Data Units Written: 595 00:07:19.844 Host Read Commands: 40147 00:07:19.844 Host Write Commands: 39933 00:07:19.844 Controller Busy Time: 0 minutes 00:07:19.844 Power Cycles: 0 00:07:19.844 Power On Hours: 0 hours 00:07:19.844 Unsafe Shutdowns: 0 00:07:19.844 Unrecoverable Media Errors: 0 00:07:19.844 Lifetime Error Log Entries: 0 00:07:19.844 Warning Temperature Time: 0 minutes 00:07:19.844 Critical Temperature Time: 0 minutes 00:07:19.844 00:07:19.844 Number of Queues 00:07:19.844 ================ 00:07:19.844 Number of I/O Submission Queues: 64 00:07:19.844 Number of I/O Completion Queues: 64 00:07:19.844 00:07:19.844 ZNS Specific Controller Data 00:07:19.844 ============================ 00:07:19.844 Zone Append Size Limit: 0 00:07:19.844 00:07:19.844 00:07:19.844 Active Namespaces 00:07:19.844 ================= 00:07:19.844 Namespace ID:1 00:07:19.844 Error Recovery Timeout: Unlimited 00:07:19.844 Command Set Identifier: NVM (00h) 00:07:19.844 Deallocate: Supported 00:07:19.844 Deallocated/Unwritten Error: Supported 00:07:19.844 Deallocated Read Value: All 0x00 00:07:19.844 Deallocate in Write Zeroes: Not Supported 00:07:19.845 Deallocated Guard Field: 0xFFFF 00:07:19.845 Flush: 
Supported 00:07:19.845 Reservation: Not Supported 00:07:19.845 Metadata Transferred as: Separate Metadata Buffer 00:07:19.845 Namespace Sharing Capabilities: Private 00:07:19.845 Size (in LBAs): 1548666 (5GiB) 00:07:19.845 Capacity (in LBAs): 1548666 (5GiB) 00:07:19.845 Utilization (in LBAs): 1548666 (5GiB) 00:07:19.845 Thin Provisioning: Not Supported 00:07:19.845 Per-NS Atomic Units: No 00:07:19.845 Maximum Single Source Range Length: 128 00:07:19.845 Maximum Copy Length: 128 00:07:19.845 Maximum Source Range Count: 128 00:07:19.845 NGUID/EUI64 Never Reused: No 00:07:19.845 Namespace Write Protected: No 00:07:19.845 Number of LBA Formats: 8 00:07:19.845 Current LBA Format: LBA Format #07 00:07:19.845 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:19.845 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:19.845 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:19.845 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:19.845 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:19.845 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:19.845 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:19.845 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:19.845 00:07:19.845 NVM Specific Namespace Data 00:07:19.845 =========================== 00:07:19.845 Logical Block Storage Tag Mask: 0 00:07:19.845 Protection Information Capabilities: 00:07:19.845 16b Guard Protection Information Storage Tag Support: No 00:07:19.845 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:19.845 Storage Tag Check Read Support: No 00:07:19.845 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.845 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.845 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.845 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.845 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.845 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.845 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.845 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.845 20:18:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:19.845 20:18:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:20.104 ===================================================== 00:07:20.104 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:20.104 ===================================================== 00:07:20.104 Controller Capabilities/Features 00:07:20.104 ================================ 00:07:20.104 Vendor ID: 1b36 00:07:20.104 Subsystem Vendor ID: 1af4 00:07:20.104 Serial Number: 12341 00:07:20.104 Model Number: QEMU NVMe Ctrl 00:07:20.104 Firmware Version: 8.0.0 00:07:20.104 Recommended Arb Burst: 6 00:07:20.104 IEEE OUI Identifier: 00 54 52 00:07:20.104 Multi-path I/O 00:07:20.104 May have multiple subsystem ports: No 00:07:20.104 May have multiple controllers: No 00:07:20.105 Associated with SR-IOV VF: No 00:07:20.105 Max Data Transfer Size: 524288 00:07:20.105 Max Number of Namespaces: 256 00:07:20.105 Max Number of I/O Queues: 64 00:07:20.105 NVMe 
Specification Version (VS): 1.4 00:07:20.105 NVMe Specification Version (Identify): 1.4 00:07:20.105 Maximum Queue Entries: 2048 00:07:20.105 Contiguous Queues Required: Yes 00:07:20.105 Arbitration Mechanisms Supported 00:07:20.105 Weighted Round Robin: Not Supported 00:07:20.105 Vendor Specific: Not Supported 00:07:20.105 Reset Timeout: 7500 ms 00:07:20.105 Doorbell Stride: 4 bytes 00:07:20.105 NVM Subsystem Reset: Not Supported 00:07:20.105 Command Sets Supported 00:07:20.105 NVM Command Set: Supported 00:07:20.105 Boot Partition: Not Supported 00:07:20.105 Memory Page Size Minimum: 4096 bytes 00:07:20.105 Memory Page Size Maximum: 65536 bytes 00:07:20.105 Persistent Memory Region: Not Supported 00:07:20.105 Optional Asynchronous Events Supported 00:07:20.105 Namespace Attribute Notices: Supported 00:07:20.105 Firmware Activation Notices: Not Supported 00:07:20.105 ANA Change Notices: Not Supported 00:07:20.105 PLE Aggregate Log Change Notices: Not Supported 00:07:20.105 LBA Status Info Alert Notices: Not Supported 00:07:20.105 EGE Aggregate Log Change Notices: Not Supported 00:07:20.105 Normal NVM Subsystem Shutdown event: Not Supported 00:07:20.105 Zone Descriptor Change Notices: Not Supported 00:07:20.105 Discovery Log Change Notices: Not Supported 00:07:20.105 Controller Attributes 00:07:20.105 128-bit Host Identifier: Not Supported 00:07:20.105 Non-Operational Permissive Mode: Not Supported 00:07:20.105 NVM Sets: Not Supported 00:07:20.105 Read Recovery Levels: Not Supported 00:07:20.105 Endurance Groups: Not Supported 00:07:20.105 Predictable Latency Mode: Not Supported 00:07:20.105 Traffic Based Keep ALive: Not Supported 00:07:20.105 Namespace Granularity: Not Supported 00:07:20.105 SQ Associations: Not Supported 00:07:20.105 UUID List: Not Supported 00:07:20.105 Multi-Domain Subsystem: Not Supported 00:07:20.105 Fixed Capacity Management: Not Supported 00:07:20.105 Variable Capacity Management: Not Supported 00:07:20.105 Delete Endurance Group: Not Supported 00:07:20.105 Delete NVM Set: Not Supported 00:07:20.105 Extended LBA Formats Supported: Supported 00:07:20.105 Flexible Data Placement Supported: Not Supported 00:07:20.105 00:07:20.105 Controller Memory Buffer Support 00:07:20.105 ================================ 00:07:20.105 Supported: No 00:07:20.105 00:07:20.105 Persistent Memory Region Support 00:07:20.105 ================================ 00:07:20.105 Supported: No 00:07:20.105 00:07:20.105 Admin Command Set Attributes 00:07:20.105 ============================ 00:07:20.105 Security Send/Receive: Not Supported 00:07:20.105 Format NVM: Supported 00:07:20.105 Firmware Activate/Download: Not Supported 00:07:20.105 Namespace Management: Supported 00:07:20.105 Device Self-Test: Not Supported 00:07:20.105 Directives: Supported 00:07:20.105 NVMe-MI: Not Supported 00:07:20.105 Virtualization Management: Not Supported 00:07:20.105 Doorbell Buffer Config: Supported 00:07:20.105 Get LBA Status Capability: Not Supported 00:07:20.105 Command & Feature Lockdown Capability: Not Supported 00:07:20.105 Abort Command Limit: 4 00:07:20.105 Async Event Request Limit: 4 00:07:20.105 Number of Firmware Slots: N/A 00:07:20.105 Firmware Slot 1 Read-Only: N/A 00:07:20.105 Firmware Activation Without Reset: N/A 00:07:20.105 Multiple Update Detection Support: N/A 00:07:20.105 Firmware Update Granularity: No Information Provided 00:07:20.105 Per-Namespace SMART Log: Yes 00:07:20.105 Asymmetric Namespace Access Log Page: Not Supported 00:07:20.105 Subsystem NQN: nqn.2019-08.org.qemu:12341 
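[Editor's note] The nvme.sh@15/@16 traces interleaved in this output show how these dumps are produced: the test script loops over the discovered PCIe addresses and runs spdk_nvme_identify once per controller. A rough reconstruction of that loop follows; the contents of the bdfs array are inferred from the traddr values in the traces, not taken from the script itself:

  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0)   # assumed from the traced traddr values
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:${bdf}" -i 0
  done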
00:07:20.105 Command Effects Log Page: Supported 00:07:20.105 Get Log Page Extended Data: Supported 00:07:20.105 Telemetry Log Pages: Not Supported 00:07:20.105 Persistent Event Log Pages: Not Supported 00:07:20.105 Supported Log Pages Log Page: May Support 00:07:20.105 Commands Supported & Effects Log Page: Not Supported 00:07:20.105 Feature Identifiers & Effects Log Page:May Support 00:07:20.105 NVMe-MI Commands & Effects Log Page: May Support 00:07:20.105 Data Area 4 for Telemetry Log: Not Supported 00:07:20.105 Error Log Page Entries Supported: 1 00:07:20.105 Keep Alive: Not Supported 00:07:20.105 00:07:20.105 NVM Command Set Attributes 00:07:20.105 ========================== 00:07:20.105 Submission Queue Entry Size 00:07:20.105 Max: 64 00:07:20.105 Min: 64 00:07:20.105 Completion Queue Entry Size 00:07:20.105 Max: 16 00:07:20.105 Min: 16 00:07:20.105 Number of Namespaces: 256 00:07:20.105 Compare Command: Supported 00:07:20.105 Write Uncorrectable Command: Not Supported 00:07:20.105 Dataset Management Command: Supported 00:07:20.105 Write Zeroes Command: Supported 00:07:20.105 Set Features Save Field: Supported 00:07:20.105 Reservations: Not Supported 00:07:20.105 Timestamp: Supported 00:07:20.105 Copy: Supported 00:07:20.105 Volatile Write Cache: Present 00:07:20.105 Atomic Write Unit (Normal): 1 00:07:20.105 Atomic Write Unit (PFail): 1 00:07:20.105 Atomic Compare & Write Unit: 1 00:07:20.105 Fused Compare & Write: Not Supported 00:07:20.105 Scatter-Gather List 00:07:20.105 SGL Command Set: Supported 00:07:20.105 SGL Keyed: Not Supported 00:07:20.105 SGL Bit Bucket Descriptor: Not Supported 00:07:20.105 SGL Metadata Pointer: Not Supported 00:07:20.105 Oversized SGL: Not Supported 00:07:20.105 SGL Metadata Address: Not Supported 00:07:20.105 SGL Offset: Not Supported 00:07:20.105 Transport SGL Data Block: Not Supported 00:07:20.105 Replay Protected Memory Block: Not Supported 00:07:20.105 00:07:20.105 Firmware Slot Information 00:07:20.105 ========================= 00:07:20.105 Active slot: 1 00:07:20.105 Slot 1 Firmware Revision: 1.0 00:07:20.105 00:07:20.105 00:07:20.105 Commands Supported and Effects 00:07:20.105 ============================== 00:07:20.105 Admin Commands 00:07:20.105 -------------- 00:07:20.105 Delete I/O Submission Queue (00h): Supported 00:07:20.105 Create I/O Submission Queue (01h): Supported 00:07:20.105 Get Log Page (02h): Supported 00:07:20.105 Delete I/O Completion Queue (04h): Supported 00:07:20.105 Create I/O Completion Queue (05h): Supported 00:07:20.105 Identify (06h): Supported 00:07:20.105 Abort (08h): Supported 00:07:20.105 Set Features (09h): Supported 00:07:20.105 Get Features (0Ah): Supported 00:07:20.105 Asynchronous Event Request (0Ch): Supported 00:07:20.105 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:20.105 Directive Send (19h): Supported 00:07:20.105 Directive Receive (1Ah): Supported 00:07:20.105 Virtualization Management (1Ch): Supported 00:07:20.105 Doorbell Buffer Config (7Ch): Supported 00:07:20.105 Format NVM (80h): Supported LBA-Change 00:07:20.105 I/O Commands 00:07:20.105 ------------ 00:07:20.105 Flush (00h): Supported LBA-Change 00:07:20.105 Write (01h): Supported LBA-Change 00:07:20.105 Read (02h): Supported 00:07:20.105 Compare (05h): Supported 00:07:20.105 Write Zeroes (08h): Supported LBA-Change 00:07:20.105 Dataset Management (09h): Supported LBA-Change 00:07:20.105 Unknown (0Ch): Supported 00:07:20.105 Unknown (12h): Supported 00:07:20.105 Copy (19h): Supported LBA-Change 00:07:20.105 Unknown (1Dh): 
Supported LBA-Change 00:07:20.105 00:07:20.105 Error Log 00:07:20.105 ========= 00:07:20.105 00:07:20.105 Arbitration 00:07:20.105 =========== 00:07:20.105 Arbitration Burst: no limit 00:07:20.105 00:07:20.105 Power Management 00:07:20.105 ================ 00:07:20.105 Number of Power States: 1 00:07:20.105 Current Power State: Power State #0 00:07:20.105 Power State #0: 00:07:20.105 Max Power: 25.00 W 00:07:20.105 Non-Operational State: Operational 00:07:20.105 Entry Latency: 16 microseconds 00:07:20.105 Exit Latency: 4 microseconds 00:07:20.105 Relative Read Throughput: 0 00:07:20.105 Relative Read Latency: 0 00:07:20.105 Relative Write Throughput: 0 00:07:20.105 Relative Write Latency: 0 00:07:20.105 Idle Power: Not Reported 00:07:20.105 Active Power: Not Reported 00:07:20.105 Non-Operational Permissive Mode: Not Supported 00:07:20.105 00:07:20.105 Health Information 00:07:20.105 ================== 00:07:20.105 Critical Warnings: 00:07:20.105 Available Spare Space: OK 00:07:20.105 Temperature: OK 00:07:20.105 Device Reliability: OK 00:07:20.105 Read Only: No 00:07:20.105 Volatile Memory Backup: OK 00:07:20.105 Current Temperature: 323 Kelvin (50 Celsius) 00:07:20.105 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:20.105 Available Spare: 0% 00:07:20.105 Available Spare Threshold: 0% 00:07:20.105 Life Percentage Used: 0% 00:07:20.105 Data Units Read: 1018 00:07:20.105 Data Units Written: 891 00:07:20.105 Host Read Commands: 58659 00:07:20.105 Host Write Commands: 57553 00:07:20.105 Controller Busy Time: 0 minutes 00:07:20.105 Power Cycles: 0 00:07:20.105 Power On Hours: 0 hours 00:07:20.105 Unsafe Shutdowns: 0 00:07:20.105 Unrecoverable Media Errors: 0 00:07:20.105 Lifetime Error Log Entries: 0 00:07:20.105 Warning Temperature Time: 0 minutes 00:07:20.105 Critical Temperature Time: 0 minutes 00:07:20.106 00:07:20.106 Number of Queues 00:07:20.106 ================ 00:07:20.106 Number of I/O Submission Queues: 64 00:07:20.106 Number of I/O Completion Queues: 64 00:07:20.106 00:07:20.106 ZNS Specific Controller Data 00:07:20.106 ============================ 00:07:20.106 Zone Append Size Limit: 0 00:07:20.106 00:07:20.106 00:07:20.106 Active Namespaces 00:07:20.106 ================= 00:07:20.106 Namespace ID:1 00:07:20.106 Error Recovery Timeout: Unlimited 00:07:20.106 Command Set Identifier: NVM (00h) 00:07:20.106 Deallocate: Supported 00:07:20.106 Deallocated/Unwritten Error: Supported 00:07:20.106 Deallocated Read Value: All 0x00 00:07:20.106 Deallocate in Write Zeroes: Not Supported 00:07:20.106 Deallocated Guard Field: 0xFFFF 00:07:20.106 Flush: Supported 00:07:20.106 Reservation: Not Supported 00:07:20.106 Namespace Sharing Capabilities: Private 00:07:20.106 Size (in LBAs): 1310720 (5GiB) 00:07:20.106 Capacity (in LBAs): 1310720 (5GiB) 00:07:20.106 Utilization (in LBAs): 1310720 (5GiB) 00:07:20.106 Thin Provisioning: Not Supported 00:07:20.106 Per-NS Atomic Units: No 00:07:20.106 Maximum Single Source Range Length: 128 00:07:20.106 Maximum Copy Length: 128 00:07:20.106 Maximum Source Range Count: 128 00:07:20.106 NGUID/EUI64 Never Reused: No 00:07:20.106 Namespace Write Protected: No 00:07:20.106 Number of LBA Formats: 8 00:07:20.106 Current LBA Format: LBA Format #04 00:07:20.106 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:20.106 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:20.106 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:20.106 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:20.106 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:20.106 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:20.106 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:20.106 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:20.106 00:07:20.106 NVM Specific Namespace Data 00:07:20.106 =========================== 00:07:20.106 Logical Block Storage Tag Mask: 0 00:07:20.106 Protection Information Capabilities: 00:07:20.106 16b Guard Protection Information Storage Tag Support: No 00:07:20.106 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:20.106 Storage Tag Check Read Support: No 00:07:20.106 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.106 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.106 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.106 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.106 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.106 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.106 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.106 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.106 20:18:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:20.106 20:18:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:20.106 ===================================================== 00:07:20.106 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:20.106 ===================================================== 00:07:20.106 Controller Capabilities/Features 00:07:20.106 ================================ 00:07:20.106 Vendor ID: 1b36 00:07:20.106 Subsystem Vendor ID: 1af4 00:07:20.106 Serial Number: 12342 00:07:20.106 Model Number: QEMU NVMe Ctrl 00:07:20.106 Firmware Version: 8.0.0 00:07:20.106 Recommended Arb Burst: 6 00:07:20.106 IEEE OUI Identifier: 00 54 52 00:07:20.106 Multi-path I/O 00:07:20.106 May have multiple subsystem ports: No 00:07:20.106 May have multiple controllers: No 00:07:20.106 Associated with SR-IOV VF: No 00:07:20.106 Max Data Transfer Size: 524288 00:07:20.106 Max Number of Namespaces: 256 00:07:20.106 Max Number of I/O Queues: 64 00:07:20.106 NVMe Specification Version (VS): 1.4 00:07:20.106 NVMe Specification Version (Identify): 1.4 00:07:20.106 Maximum Queue Entries: 2048 00:07:20.106 Contiguous Queues Required: Yes 00:07:20.106 Arbitration Mechanisms Supported 00:07:20.106 Weighted Round Robin: Not Supported 00:07:20.106 Vendor Specific: Not Supported 00:07:20.106 Reset Timeout: 7500 ms 00:07:20.106 Doorbell Stride: 4 bytes 00:07:20.106 NVM Subsystem Reset: Not Supported 00:07:20.106 Command Sets Supported 00:07:20.106 NVM Command Set: Supported 00:07:20.106 Boot Partition: Not Supported 00:07:20.106 Memory Page Size Minimum: 4096 bytes 00:07:20.106 Memory Page Size Maximum: 65536 bytes 00:07:20.106 Persistent Memory Region: Not Supported 00:07:20.106 Optional Asynchronous Events Supported 00:07:20.106 Namespace Attribute Notices: Supported 00:07:20.106 Firmware Activation Notices: Not Supported 00:07:20.106 ANA Change Notices: Not Supported 00:07:20.106 PLE Aggregate Log Change Notices: Not Supported 00:07:20.106 LBA Status Info Alert Notices: 
Not Supported 00:07:20.106 EGE Aggregate Log Change Notices: Not Supported 00:07:20.106 Normal NVM Subsystem Shutdown event: Not Supported 00:07:20.106 Zone Descriptor Change Notices: Not Supported 00:07:20.106 Discovery Log Change Notices: Not Supported 00:07:20.106 Controller Attributes 00:07:20.106 128-bit Host Identifier: Not Supported 00:07:20.106 Non-Operational Permissive Mode: Not Supported 00:07:20.106 NVM Sets: Not Supported 00:07:20.106 Read Recovery Levels: Not Supported 00:07:20.106 Endurance Groups: Not Supported 00:07:20.106 Predictable Latency Mode: Not Supported 00:07:20.106 Traffic Based Keep ALive: Not Supported 00:07:20.106 Namespace Granularity: Not Supported 00:07:20.106 SQ Associations: Not Supported 00:07:20.106 UUID List: Not Supported 00:07:20.106 Multi-Domain Subsystem: Not Supported 00:07:20.106 Fixed Capacity Management: Not Supported 00:07:20.106 Variable Capacity Management: Not Supported 00:07:20.106 Delete Endurance Group: Not Supported 00:07:20.106 Delete NVM Set: Not Supported 00:07:20.106 Extended LBA Formats Supported: Supported 00:07:20.106 Flexible Data Placement Supported: Not Supported 00:07:20.106 00:07:20.106 Controller Memory Buffer Support 00:07:20.106 ================================ 00:07:20.106 Supported: No 00:07:20.106 00:07:20.106 Persistent Memory Region Support 00:07:20.106 ================================ 00:07:20.106 Supported: No 00:07:20.106 00:07:20.106 Admin Command Set Attributes 00:07:20.106 ============================ 00:07:20.106 Security Send/Receive: Not Supported 00:07:20.106 Format NVM: Supported 00:07:20.106 Firmware Activate/Download: Not Supported 00:07:20.106 Namespace Management: Supported 00:07:20.106 Device Self-Test: Not Supported 00:07:20.106 Directives: Supported 00:07:20.106 NVMe-MI: Not Supported 00:07:20.106 Virtualization Management: Not Supported 00:07:20.106 Doorbell Buffer Config: Supported 00:07:20.106 Get LBA Status Capability: Not Supported 00:07:20.106 Command & Feature Lockdown Capability: Not Supported 00:07:20.106 Abort Command Limit: 4 00:07:20.106 Async Event Request Limit: 4 00:07:20.106 Number of Firmware Slots: N/A 00:07:20.106 Firmware Slot 1 Read-Only: N/A 00:07:20.106 Firmware Activation Without Reset: N/A 00:07:20.106 Multiple Update Detection Support: N/A 00:07:20.106 Firmware Update Granularity: No Information Provided 00:07:20.106 Per-Namespace SMART Log: Yes 00:07:20.106 Asymmetric Namespace Access Log Page: Not Supported 00:07:20.106 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:20.106 Command Effects Log Page: Supported 00:07:20.106 Get Log Page Extended Data: Supported 00:07:20.106 Telemetry Log Pages: Not Supported 00:07:20.106 Persistent Event Log Pages: Not Supported 00:07:20.106 Supported Log Pages Log Page: May Support 00:07:20.106 Commands Supported & Effects Log Page: Not Supported 00:07:20.106 Feature Identifiers & Effects Log Page:May Support 00:07:20.106 NVMe-MI Commands & Effects Log Page: May Support 00:07:20.106 Data Area 4 for Telemetry Log: Not Supported 00:07:20.106 Error Log Page Entries Supported: 1 00:07:20.106 Keep Alive: Not Supported 00:07:20.106 00:07:20.106 NVM Command Set Attributes 00:07:20.106 ========================== 00:07:20.106 Submission Queue Entry Size 00:07:20.106 Max: 64 00:07:20.106 Min: 64 00:07:20.106 Completion Queue Entry Size 00:07:20.106 Max: 16 00:07:20.106 Min: 16 00:07:20.106 Number of Namespaces: 256 00:07:20.106 Compare Command: Supported 00:07:20.106 Write Uncorrectable Command: Not Supported 00:07:20.106 Dataset Management Command: 
Supported 00:07:20.106 Write Zeroes Command: Supported 00:07:20.106 Set Features Save Field: Supported 00:07:20.106 Reservations: Not Supported 00:07:20.106 Timestamp: Supported 00:07:20.106 Copy: Supported 00:07:20.106 Volatile Write Cache: Present 00:07:20.106 Atomic Write Unit (Normal): 1 00:07:20.106 Atomic Write Unit (PFail): 1 00:07:20.106 Atomic Compare & Write Unit: 1 00:07:20.106 Fused Compare & Write: Not Supported 00:07:20.106 Scatter-Gather List 00:07:20.106 SGL Command Set: Supported 00:07:20.106 SGL Keyed: Not Supported 00:07:20.106 SGL Bit Bucket Descriptor: Not Supported 00:07:20.106 SGL Metadata Pointer: Not Supported 00:07:20.106 Oversized SGL: Not Supported 00:07:20.106 SGL Metadata Address: Not Supported 00:07:20.106 SGL Offset: Not Supported 00:07:20.107 Transport SGL Data Block: Not Supported 00:07:20.107 Replay Protected Memory Block: Not Supported 00:07:20.107 00:07:20.107 Firmware Slot Information 00:07:20.107 ========================= 00:07:20.107 Active slot: 1 00:07:20.107 Slot 1 Firmware Revision: 1.0 00:07:20.107 00:07:20.107 00:07:20.107 Commands Supported and Effects 00:07:20.107 ============================== 00:07:20.107 Admin Commands 00:07:20.107 -------------- 00:07:20.107 Delete I/O Submission Queue (00h): Supported 00:07:20.107 Create I/O Submission Queue (01h): Supported 00:07:20.107 Get Log Page (02h): Supported 00:07:20.107 Delete I/O Completion Queue (04h): Supported 00:07:20.107 Create I/O Completion Queue (05h): Supported 00:07:20.107 Identify (06h): Supported 00:07:20.107 Abort (08h): Supported 00:07:20.107 Set Features (09h): Supported 00:07:20.107 Get Features (0Ah): Supported 00:07:20.107 Asynchronous Event Request (0Ch): Supported 00:07:20.107 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:20.107 Directive Send (19h): Supported 00:07:20.107 Directive Receive (1Ah): Supported 00:07:20.107 Virtualization Management (1Ch): Supported 00:07:20.107 Doorbell Buffer Config (7Ch): Supported 00:07:20.107 Format NVM (80h): Supported LBA-Change 00:07:20.107 I/O Commands 00:07:20.107 ------------ 00:07:20.107 Flush (00h): Supported LBA-Change 00:07:20.107 Write (01h): Supported LBA-Change 00:07:20.107 Read (02h): Supported 00:07:20.107 Compare (05h): Supported 00:07:20.107 Write Zeroes (08h): Supported LBA-Change 00:07:20.107 Dataset Management (09h): Supported LBA-Change 00:07:20.107 Unknown (0Ch): Supported 00:07:20.107 Unknown (12h): Supported 00:07:20.107 Copy (19h): Supported LBA-Change 00:07:20.107 Unknown (1Dh): Supported LBA-Change 00:07:20.107 00:07:20.107 Error Log 00:07:20.107 ========= 00:07:20.107 00:07:20.107 Arbitration 00:07:20.107 =========== 00:07:20.107 Arbitration Burst: no limit 00:07:20.107 00:07:20.107 Power Management 00:07:20.107 ================ 00:07:20.107 Number of Power States: 1 00:07:20.107 Current Power State: Power State #0 00:07:20.107 Power State #0: 00:07:20.107 Max Power: 25.00 W 00:07:20.107 Non-Operational State: Operational 00:07:20.107 Entry Latency: 16 microseconds 00:07:20.107 Exit Latency: 4 microseconds 00:07:20.107 Relative Read Throughput: 0 00:07:20.107 Relative Read Latency: 0 00:07:20.107 Relative Write Throughput: 0 00:07:20.107 Relative Write Latency: 0 00:07:20.107 Idle Power: Not Reported 00:07:20.107 Active Power: Not Reported 00:07:20.107 Non-Operational Permissive Mode: Not Supported 00:07:20.107 00:07:20.107 Health Information 00:07:20.107 ================== 00:07:20.107 Critical Warnings: 00:07:20.107 Available Spare Space: OK 00:07:20.107 Temperature: OK 00:07:20.107 Device 
Reliability: OK 00:07:20.107 Read Only: No 00:07:20.107 Volatile Memory Backup: OK 00:07:20.107 Current Temperature: 323 Kelvin (50 Celsius) 00:07:20.107 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:20.107 Available Spare: 0% 00:07:20.107 Available Spare Threshold: 0% 00:07:20.107 Life Percentage Used: 0% 00:07:20.107 Data Units Read: 2230 00:07:20.107 Data Units Written: 2017 00:07:20.107 Host Read Commands: 123200 00:07:20.107 Host Write Commands: 121469 00:07:20.107 Controller Busy Time: 0 minutes 00:07:20.107 Power Cycles: 0 00:07:20.107 Power On Hours: 0 hours 00:07:20.107 Unsafe Shutdowns: 0 00:07:20.107 Unrecoverable Media Errors: 0 00:07:20.107 Lifetime Error Log Entries: 0 00:07:20.107 Warning Temperature Time: 0 minutes 00:07:20.107 Critical Temperature Time: 0 minutes 00:07:20.107 00:07:20.107 Number of Queues 00:07:20.107 ================ 00:07:20.107 Number of I/O Submission Queues: 64 00:07:20.107 Number of I/O Completion Queues: 64 00:07:20.107 00:07:20.107 ZNS Specific Controller Data 00:07:20.107 ============================ 00:07:20.107 Zone Append Size Limit: 0 00:07:20.107 00:07:20.107 00:07:20.107 Active Namespaces 00:07:20.107 ================= 00:07:20.107 Namespace ID:1 00:07:20.107 Error Recovery Timeout: Unlimited 00:07:20.107 Command Set Identifier: NVM (00h) 00:07:20.107 Deallocate: Supported 00:07:20.107 Deallocated/Unwritten Error: Supported 00:07:20.107 Deallocated Read Value: All 0x00 00:07:20.107 Deallocate in Write Zeroes: Not Supported 00:07:20.107 Deallocated Guard Field: 0xFFFF 00:07:20.107 Flush: Supported 00:07:20.107 Reservation: Not Supported 00:07:20.107 Namespace Sharing Capabilities: Private 00:07:20.107 Size (in LBAs): 1048576 (4GiB) 00:07:20.107 Capacity (in LBAs): 1048576 (4GiB) 00:07:20.107 Utilization (in LBAs): 1048576 (4GiB) 00:07:20.107 Thin Provisioning: Not Supported 00:07:20.107 Per-NS Atomic Units: No 00:07:20.107 Maximum Single Source Range Length: 128 00:07:20.107 Maximum Copy Length: 128 00:07:20.107 Maximum Source Range Count: 128 00:07:20.107 NGUID/EUI64 Never Reused: No 00:07:20.107 Namespace Write Protected: No 00:07:20.107 Number of LBA Formats: 8 00:07:20.107 Current LBA Format: LBA Format #04 00:07:20.107 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:20.107 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:20.107 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:20.107 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:20.107 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:20.107 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:20.107 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:20.107 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:20.107 00:07:20.107 NVM Specific Namespace Data 00:07:20.107 =========================== 00:07:20.107 Logical Block Storage Tag Mask: 0 00:07:20.107 Protection Information Capabilities: 00:07:20.107 16b Guard Protection Information Storage Tag Support: No 00:07:20.107 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:20.107 Storage Tag Check Read Support: No 00:07:20.107 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Namespace ID:2 00:07:20.107 Error Recovery Timeout: Unlimited 00:07:20.107 Command Set Identifier: NVM (00h) 00:07:20.107 Deallocate: Supported 00:07:20.107 Deallocated/Unwritten Error: Supported 00:07:20.107 Deallocated Read Value: All 0x00 00:07:20.107 Deallocate in Write Zeroes: Not Supported 00:07:20.107 Deallocated Guard Field: 0xFFFF 00:07:20.107 Flush: Supported 00:07:20.107 Reservation: Not Supported 00:07:20.107 Namespace Sharing Capabilities: Private 00:07:20.107 Size (in LBAs): 1048576 (4GiB) 00:07:20.107 Capacity (in LBAs): 1048576 (4GiB) 00:07:20.107 Utilization (in LBAs): 1048576 (4GiB) 00:07:20.107 Thin Provisioning: Not Supported 00:07:20.107 Per-NS Atomic Units: No 00:07:20.107 Maximum Single Source Range Length: 128 00:07:20.107 Maximum Copy Length: 128 00:07:20.107 Maximum Source Range Count: 128 00:07:20.107 NGUID/EUI64 Never Reused: No 00:07:20.107 Namespace Write Protected: No 00:07:20.107 Number of LBA Formats: 8 00:07:20.107 Current LBA Format: LBA Format #04 00:07:20.107 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:20.107 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:20.107 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:20.107 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:20.107 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:20.107 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:20.107 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:20.107 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:20.107 00:07:20.107 NVM Specific Namespace Data 00:07:20.107 =========================== 00:07:20.107 Logical Block Storage Tag Mask: 0 00:07:20.107 Protection Information Capabilities: 00:07:20.107 16b Guard Protection Information Storage Tag Support: No 00:07:20.107 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:20.107 Storage Tag Check Read Support: No 00:07:20.107 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.107 Namespace ID:3 00:07:20.107 Error Recovery Timeout: Unlimited 00:07:20.107 Command Set Identifier: NVM (00h) 00:07:20.107 Deallocate: Supported 00:07:20.107 Deallocated/Unwritten Error: Supported 00:07:20.107 Deallocated Read Value: All 0x00 00:07:20.107 Deallocate in Write Zeroes: Not Supported 00:07:20.107 Deallocated Guard Field: 0xFFFF 00:07:20.108 Flush: Supported 00:07:20.108 Reservation: Not Supported 00:07:20.108 
Namespace Sharing Capabilities: Private 00:07:20.108 Size (in LBAs): 1048576 (4GiB) 00:07:20.108 Capacity (in LBAs): 1048576 (4GiB) 00:07:20.108 Utilization (in LBAs): 1048576 (4GiB) 00:07:20.108 Thin Provisioning: Not Supported 00:07:20.108 Per-NS Atomic Units: No 00:07:20.108 Maximum Single Source Range Length: 128 00:07:20.108 Maximum Copy Length: 128 00:07:20.108 Maximum Source Range Count: 128 00:07:20.108 NGUID/EUI64 Never Reused: No 00:07:20.108 Namespace Write Protected: No 00:07:20.108 Number of LBA Formats: 8 00:07:20.108 Current LBA Format: LBA Format #04 00:07:20.108 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:20.108 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:20.108 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:20.108 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:20.108 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:20.108 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:20.108 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:20.108 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:20.108 00:07:20.108 NVM Specific Namespace Data 00:07:20.108 =========================== 00:07:20.108 Logical Block Storage Tag Mask: 0 00:07:20.108 Protection Information Capabilities: 00:07:20.108 16b Guard Protection Information Storage Tag Support: No 00:07:20.108 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:20.108 Storage Tag Check Read Support: No 00:07:20.108 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.108 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.108 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.108 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.108 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.108 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.108 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.108 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.366 20:18:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:20.366 20:18:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:20.366 ===================================================== 00:07:20.366 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:20.366 ===================================================== 00:07:20.366 Controller Capabilities/Features 00:07:20.366 ================================ 00:07:20.366 Vendor ID: 1b36 00:07:20.366 Subsystem Vendor ID: 1af4 00:07:20.366 Serial Number: 12343 00:07:20.366 Model Number: QEMU NVMe Ctrl 00:07:20.366 Firmware Version: 8.0.0 00:07:20.366 Recommended Arb Burst: 6 00:07:20.366 IEEE OUI Identifier: 00 54 52 00:07:20.366 Multi-path I/O 00:07:20.366 May have multiple subsystem ports: No 00:07:20.366 May have multiple controllers: Yes 00:07:20.366 Associated with SR-IOV VF: No 00:07:20.366 Max Data Transfer Size: 524288 00:07:20.366 Max Number of Namespaces: 256 00:07:20.366 Max Number of I/O Queues: 64 00:07:20.366 NVMe Specification Version (VS): 1.4 00:07:20.366 NVMe Specification Version (Identify): 1.4 00:07:20.366 Maximum Queue Entries: 2048 
00:07:20.366 Contiguous Queues Required: Yes 00:07:20.366 Arbitration Mechanisms Supported 00:07:20.366 Weighted Round Robin: Not Supported 00:07:20.366 Vendor Specific: Not Supported 00:07:20.367 Reset Timeout: 7500 ms 00:07:20.367 Doorbell Stride: 4 bytes 00:07:20.367 NVM Subsystem Reset: Not Supported 00:07:20.367 Command Sets Supported 00:07:20.367 NVM Command Set: Supported 00:07:20.367 Boot Partition: Not Supported 00:07:20.367 Memory Page Size Minimum: 4096 bytes 00:07:20.367 Memory Page Size Maximum: 65536 bytes 00:07:20.367 Persistent Memory Region: Not Supported 00:07:20.367 Optional Asynchronous Events Supported 00:07:20.367 Namespace Attribute Notices: Supported 00:07:20.367 Firmware Activation Notices: Not Supported 00:07:20.367 ANA Change Notices: Not Supported 00:07:20.367 PLE Aggregate Log Change Notices: Not Supported 00:07:20.367 LBA Status Info Alert Notices: Not Supported 00:07:20.367 EGE Aggregate Log Change Notices: Not Supported 00:07:20.367 Normal NVM Subsystem Shutdown event: Not Supported 00:07:20.367 Zone Descriptor Change Notices: Not Supported 00:07:20.367 Discovery Log Change Notices: Not Supported 00:07:20.367 Controller Attributes 00:07:20.367 128-bit Host Identifier: Not Supported 00:07:20.367 Non-Operational Permissive Mode: Not Supported 00:07:20.367 NVM Sets: Not Supported 00:07:20.367 Read Recovery Levels: Not Supported 00:07:20.367 Endurance Groups: Supported 00:07:20.367 Predictable Latency Mode: Not Supported 00:07:20.367 Traffic Based Keep Alive: Not Supported 00:07:20.367 Namespace Granularity: Not Supported 00:07:20.367 SQ Associations: Not Supported 00:07:20.367 UUID List: Not Supported 00:07:20.367 Multi-Domain Subsystem: Not Supported 00:07:20.367 Fixed Capacity Management: Not Supported 00:07:20.367 Variable Capacity Management: Not Supported 00:07:20.367 Delete Endurance Group: Not Supported 00:07:20.367 Delete NVM Set: Not Supported 00:07:20.367 Extended LBA Formats Supported: Supported 00:07:20.367 Flexible Data Placement Supported: Supported 00:07:20.367 00:07:20.367 Controller Memory Buffer Support 00:07:20.367 ================================ 00:07:20.367 Supported: No 00:07:20.367 00:07:20.367 Persistent Memory Region Support 00:07:20.367 ================================ 00:07:20.367 Supported: No 00:07:20.367 00:07:20.367 Admin Command Set Attributes 00:07:20.367 ============================ 00:07:20.367 Security Send/Receive: Not Supported 00:07:20.367 Format NVM: Supported 00:07:20.367 Firmware Activate/Download: Not Supported 00:07:20.367 Namespace Management: Supported 00:07:20.367 Device Self-Test: Not Supported 00:07:20.367 Directives: Supported 00:07:20.367 NVMe-MI: Not Supported 00:07:20.367 Virtualization Management: Not Supported 00:07:20.367 Doorbell Buffer Config: Supported 00:07:20.367 Get LBA Status Capability: Not Supported 00:07:20.367 Command & Feature Lockdown Capability: Not Supported 00:07:20.367 Abort Command Limit: 4 00:07:20.367 Async Event Request Limit: 4 00:07:20.367 Number of Firmware Slots: N/A 00:07:20.367 Firmware Slot 1 Read-Only: N/A 00:07:20.367 Firmware Activation Without Reset: N/A 00:07:20.367 Multiple Update Detection Support: N/A 00:07:20.367 Firmware Update Granularity: No Information Provided 00:07:20.367 Per-Namespace SMART Log: Yes 00:07:20.367 Asymmetric Namespace Access Log Page: Not Supported 00:07:20.367 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:20.367 Command Effects Log Page: Supported 00:07:20.367 Get Log Page Extended Data: Supported 00:07:20.367 Telemetry Log Pages: Not
Supported 00:07:20.367 Persistent Event Log Pages: Not Supported 00:07:20.367 Supported Log Pages Log Page: May Support 00:07:20.367 Commands Supported & Effects Log Page: Not Supported 00:07:20.367 Feature Identifiers & Effects Log Page: May Support 00:07:20.367 NVMe-MI Commands & Effects Log Page: May Support 00:07:20.367 Data Area 4 for Telemetry Log: Not Supported 00:07:20.367 Error Log Page Entries Supported: 1 00:07:20.367 Keep Alive: Not Supported 00:07:20.367 00:07:20.367 NVM Command Set Attributes 00:07:20.367 ========================== 00:07:20.367 Submission Queue Entry Size 00:07:20.367 Max: 64 00:07:20.367 Min: 64 00:07:20.367 Completion Queue Entry Size 00:07:20.367 Max: 16 00:07:20.367 Min: 16 00:07:20.367 Number of Namespaces: 256 00:07:20.367 Compare Command: Supported 00:07:20.367 Write Uncorrectable Command: Not Supported 00:07:20.367 Dataset Management Command: Supported 00:07:20.367 Write Zeroes Command: Supported 00:07:20.367 Set Features Save Field: Supported 00:07:20.367 Reservations: Not Supported 00:07:20.367 Timestamp: Supported 00:07:20.367 Copy: Supported 00:07:20.367 Volatile Write Cache: Present 00:07:20.367 Atomic Write Unit (Normal): 1 00:07:20.367 Atomic Write Unit (PFail): 1 00:07:20.367 Atomic Compare & Write Unit: 1 00:07:20.367 Fused Compare & Write: Not Supported 00:07:20.367 Scatter-Gather List 00:07:20.367 SGL Command Set: Supported 00:07:20.367 SGL Keyed: Not Supported 00:07:20.367 SGL Bit Bucket Descriptor: Not Supported 00:07:20.367 SGL Metadata Pointer: Not Supported 00:07:20.367 Oversized SGL: Not Supported 00:07:20.367 SGL Metadata Address: Not Supported 00:07:20.367 SGL Offset: Not Supported 00:07:20.367 Transport SGL Data Block: Not Supported 00:07:20.367 Replay Protected Memory Block: Not Supported 00:07:20.367 00:07:20.367 Firmware Slot Information 00:07:20.367 ========================= 00:07:20.367 Active slot: 1 00:07:20.367 Slot 1 Firmware Revision: 1.0 00:07:20.367 00:07:20.367 00:07:20.367 Commands Supported and Effects 00:07:20.367 ============================== 00:07:20.367 Admin Commands 00:07:20.367 -------------- 00:07:20.367 Delete I/O Submission Queue (00h): Supported 00:07:20.367 Create I/O Submission Queue (01h): Supported 00:07:20.367 Get Log Page (02h): Supported 00:07:20.367 Delete I/O Completion Queue (04h): Supported 00:07:20.367 Create I/O Completion Queue (05h): Supported 00:07:20.367 Identify (06h): Supported 00:07:20.367 Abort (08h): Supported 00:07:20.367 Set Features (09h): Supported 00:07:20.367 Get Features (0Ah): Supported 00:07:20.367 Asynchronous Event Request (0Ch): Supported 00:07:20.367 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:20.367 Directive Send (19h): Supported 00:07:20.367 Directive Receive (1Ah): Supported 00:07:20.367 Virtualization Management (1Ch): Supported 00:07:20.367 Doorbell Buffer Config (7Ch): Supported 00:07:20.367 Format NVM (80h): Supported LBA-Change 00:07:20.367 I/O Commands 00:07:20.367 ------------ 00:07:20.367 Flush (00h): Supported LBA-Change 00:07:20.367 Write (01h): Supported LBA-Change 00:07:20.367 Read (02h): Supported 00:07:20.367 Compare (05h): Supported 00:07:20.367 Write Zeroes (08h): Supported LBA-Change 00:07:20.367 Dataset Management (09h): Supported LBA-Change 00:07:20.367 Unknown (0Ch): Supported 00:07:20.367 Unknown (12h): Supported 00:07:20.367 Copy (19h): Supported LBA-Change 00:07:20.367 Unknown (1Dh): Supported LBA-Change 00:07:20.367 00:07:20.367 Error Log 00:07:20.367 ========= 00:07:20.367 00:07:20.367 Arbitration 00:07:20.367 ===========
00:07:20.367 Arbitration Burst: no limit 00:07:20.367 00:07:20.367 Power Management 00:07:20.367 ================ 00:07:20.367 Number of Power States: 1 00:07:20.367 Current Power State: Power State #0 00:07:20.367 Power State #0: 00:07:20.367 Max Power: 25.00 W 00:07:20.367 Non-Operational State: Operational 00:07:20.367 Entry Latency: 16 microseconds 00:07:20.367 Exit Latency: 4 microseconds 00:07:20.367 Relative Read Throughput: 0 00:07:20.367 Relative Read Latency: 0 00:07:20.367 Relative Write Throughput: 0 00:07:20.367 Relative Write Latency: 0 00:07:20.367 Idle Power: Not Reported 00:07:20.367 Active Power: Not Reported 00:07:20.367 Non-Operational Permissive Mode: Not Supported 00:07:20.367 00:07:20.367 Health Information 00:07:20.367 ================== 00:07:20.367 Critical Warnings: 00:07:20.367 Available Spare Space: OK 00:07:20.367 Temperature: OK 00:07:20.367 Device Reliability: OK 00:07:20.367 Read Only: No 00:07:20.367 Volatile Memory Backup: OK 00:07:20.367 Current Temperature: 323 Kelvin (50 Celsius) 00:07:20.367 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:20.367 Available Spare: 0% 00:07:20.367 Available Spare Threshold: 0% 00:07:20.367 Life Percentage Used: 0% 00:07:20.367 Data Units Read: 959 00:07:20.367 Data Units Written: 888 00:07:20.367 Host Read Commands: 42906 00:07:20.367 Host Write Commands: 42329 00:07:20.367 Controller Busy Time: 0 minutes 00:07:20.367 Power Cycles: 0 00:07:20.367 Power On Hours: 0 hours 00:07:20.367 Unsafe Shutdowns: 0 00:07:20.367 Unrecoverable Media Errors: 0 00:07:20.367 Lifetime Error Log Entries: 0 00:07:20.367 Warning Temperature Time: 0 minutes 00:07:20.367 Critical Temperature Time: 0 minutes 00:07:20.367 00:07:20.367 Number of Queues 00:07:20.367 ================ 00:07:20.367 Number of I/O Submission Queues: 64 00:07:20.367 Number of I/O Completion Queues: 64 00:07:20.367 00:07:20.367 ZNS Specific Controller Data 00:07:20.367 ============================ 00:07:20.367 Zone Append Size Limit: 0 00:07:20.367 00:07:20.367 00:07:20.367 Active Namespaces 00:07:20.367 ================= 00:07:20.367 Namespace ID:1 00:07:20.367 Error Recovery Timeout: Unlimited 00:07:20.368 Command Set Identifier: NVM (00h) 00:07:20.368 Deallocate: Supported 00:07:20.368 Deallocated/Unwritten Error: Supported 00:07:20.368 Deallocated Read Value: All 0x00 00:07:20.368 Deallocate in Write Zeroes: Not Supported 00:07:20.368 Deallocated Guard Field: 0xFFFF 00:07:20.368 Flush: Supported 00:07:20.368 Reservation: Not Supported 00:07:20.368 Namespace Sharing Capabilities: Multiple Controllers 00:07:20.368 Size (in LBAs): 262144 (1GiB) 00:07:20.368 Capacity (in LBAs): 262144 (1GiB) 00:07:20.368 Utilization (in LBAs): 262144 (1GiB) 00:07:20.368 Thin Provisioning: Not Supported 00:07:20.368 Per-NS Atomic Units: No 00:07:20.368 Maximum Single Source Range Length: 128 00:07:20.368 Maximum Copy Length: 128 00:07:20.368 Maximum Source Range Count: 128 00:07:20.368 NGUID/EUI64 Never Reused: No 00:07:20.368 Namespace Write Protected: No 00:07:20.368 Endurance group ID: 1 00:07:20.368 Number of LBA Formats: 8 00:07:20.368 Current LBA Format: LBA Format #04 00:07:20.368 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:20.368 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:20.368 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:20.368 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:20.368 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:20.368 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:20.368 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:20.368 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:20.368 00:07:20.368 Get Feature FDP: 00:07:20.368 ================ 00:07:20.368 Enabled: Yes 00:07:20.368 FDP configuration index: 0 00:07:20.368 00:07:20.368 FDP configurations log page 00:07:20.368 =========================== 00:07:20.368 Number of FDP configurations: 1 00:07:20.368 Version: 0 00:07:20.368 Size: 112 00:07:20.368 FDP Configuration Descriptor: 0 00:07:20.368 Descriptor Size: 96 00:07:20.368 Reclaim Group Identifier format: 2 00:07:20.368 FDP Volatile Write Cache: Not Present 00:07:20.368 FDP Configuration: Valid 00:07:20.368 Vendor Specific Size: 0 00:07:20.368 Number of Reclaim Groups: 2 00:07:20.368 Number of Reclaim Unit Handles: 8 00:07:20.368 Max Placement Identifiers: 128 00:07:20.368 Number of Namespaces Supported: 256 00:07:20.368 Reclaim Unit Nominal Size: 6000000 bytes 00:07:20.368 Estimated Reclaim Unit Time Limit: Not Reported 00:07:20.368 RUH Desc #000: RUH Type: Initially Isolated 00:07:20.368 RUH Desc #001: RUH Type: Initially Isolated 00:07:20.368 RUH Desc #002: RUH Type: Initially Isolated 00:07:20.368 RUH Desc #003: RUH Type: Initially Isolated 00:07:20.368 RUH Desc #004: RUH Type: Initially Isolated 00:07:20.368 RUH Desc #005: RUH Type: Initially Isolated 00:07:20.368 RUH Desc #006: RUH Type: Initially Isolated 00:07:20.368 RUH Desc #007: RUH Type: Initially Isolated 00:07:20.368 00:07:20.368 FDP reclaim unit handle usage log page 00:07:20.368 ====================================== 00:07:20.368 Number of Reclaim Unit Handles: 8 00:07:20.368 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:20.368 RUH Usage Desc #001: RUH Attributes: Unused 00:07:20.368 RUH Usage Desc #002: RUH Attributes: Unused 00:07:20.368 RUH Usage Desc #003: RUH Attributes: Unused 00:07:20.368 RUH Usage Desc #004: RUH Attributes: Unused 00:07:20.368 RUH Usage Desc #005: RUH Attributes: Unused 00:07:20.368 RUH Usage Desc #006: RUH Attributes: Unused 00:07:20.368 RUH Usage Desc #007: RUH Attributes: Unused 00:07:20.368 00:07:20.368 FDP statistics log page 00:07:20.368 ======================= 00:07:20.368 Host bytes with metadata written: 546742272 00:07:20.368 Media bytes with metadata written: 546799616 00:07:20.368 Media bytes erased: 0 00:07:20.368 00:07:20.368 FDP events log page 00:07:20.368 =================== 00:07:20.368 Number of FDP events: 0 00:07:20.368 00:07:20.368 NVM Specific Namespace Data 00:07:20.368 =========================== 00:07:20.368 Logical Block Storage Tag Mask: 0 00:07:20.368 Protection Information Capabilities: 00:07:20.368 16b Guard Protection Information Storage Tag Support: No 00:07:20.368 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:20.368 Storage Tag Check Read Support: No 00:07:20.368 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.368 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.368 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.368 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.368 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.368 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.368 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.368 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:20.368 00:07:20.368 real 0m1.180s 00:07:20.368 user 0m0.432s 00:07:20.368 sys 0m0.545s 00:07:20.368 20:18:04 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.368 20:18:04 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:20.368 ************************************ 00:07:20.368 END TEST nvme_identify 00:07:20.368 ************************************ 00:07:20.368 20:18:04 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:20.368 20:18:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.368 20:18:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.368 20:18:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:20.368 ************************************ 00:07:20.368 START TEST nvme_perf 00:07:20.368 ************************************ 00:07:20.368 20:18:04 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:20.368 20:18:04 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:21.753 Initializing NVMe Controllers 00:07:21.753 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:21.753 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:21.753 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:21.753 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:21.753 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:21.753 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:21.753 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:21.753 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:21.753 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:21.753 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:21.753 Initialization complete. Launching workers. 
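[editor's note] Several figures in the identify dumps above and the perf summary below are derived values: namespace sizes are LBA counts multiplied by the data size of the current LBA format (#04, 4096 bytes), temperatures are raw NVMe Kelvin readings rounded to Celsius, and the MiB/s column is IOPS times the 12288-byte I/O size passed to spdk_nvme_perf via -o. The following is a minimal standalone Python sketch that reproduces those conversions from values quoted in this log; it is illustrative only, not part of the test scripts, and the helper names are ours.

#!/usr/bin/env python3
# Reproduce derived values printed by spdk_nvme_identify / spdk_nvme_perf in this log.

def lbas_to_gib(n_lbas: int, lba_data_bytes: int) -> float:
    # Namespace capacity: LBA count times data size of the in-use LBA format.
    return n_lbas * lba_data_bytes / 2**30

def kelvin_to_celsius(kelvin: int) -> int:
    # NVMe SMART temperatures are reported in Kelvin; identify prints rounded Celsius.
    return round(kelvin - 273.15)

def iops_to_mibps(iops: float, io_size_bytes: int) -> float:
    # Throughput column of the perf summary: IOPS times the -o I/O size, in MiB/s.
    return iops * io_size_bytes / 2**20

# "Size (in LBAs): 1048576 (4GiB)" with LBA Format #04 (4096-byte data, no metadata):
assert lbas_to_gib(1048576, 4096) == 4.0
# "Size (in LBAs): 262144 (1GiB)" on the FDP controller at 0000:00:13.0:
assert lbas_to_gib(262144, 4096) == 1.0
# "Current Temperature: 323 Kelvin (50 Celsius)"; threshold 343 Kelvin (70 Celsius):
assert kelvin_to_celsius(323) == 50 and kelvin_to_celsius(343) == 70
# Perf summary below: 9067.61 IOPS at -o 12288 gives 106.26 MiB/s per namespace:
assert round(iops_to_mibps(9067.61, 12288), 2) == 106.26

The same arithmetic explains the Total row of the summary table that follows: six namespaces at 9067.61 IOPS and 106.26 MiB/s each yield the reported 54405.64 IOPS and 637.57 MiB/s, and the overall 14083.88 us average is the mean of the six per-device averages.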
00:07:21.753 ======================================================== 00:07:21.753 Latency(us) 00:07:21.753 Device Information : IOPS MiB/s Average min max 00:07:21.753 PCIE (0000:00:13.0) NSID 1 from core 0: 9067.61 106.26 14136.87 5836.60 29178.53 00:07:21.753 PCIE (0000:00:10.0) NSID 1 from core 0: 9067.61 106.26 14115.48 5638.60 27829.03 00:07:21.753 PCIE (0000:00:11.0) NSID 1 from core 0: 9067.61 106.26 14095.08 5921.72 26248.41 00:07:21.753 PCIE (0000:00:12.0) NSID 1 from core 0: 9067.61 106.26 14073.61 5855.40 24835.50 00:07:21.753 PCIE (0000:00:12.0) NSID 2 from core 0: 9067.61 106.26 14051.85 5885.62 23249.07 00:07:21.753 PCIE (0000:00:12.0) NSID 3 from core 0: 9067.61 106.26 14030.42 5826.72 21625.80 00:07:21.753 ======================================================== 00:07:21.753 Total : 54405.64 637.57 14083.88 5638.60 29178.53 00:07:21.753 00:07:21.753 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:21.753 ================================================================================= 00:07:21.753 1.00000% : 6200.714us 00:07:21.753 10.00000% : 11494.006us 00:07:21.753 25.00000% : 13006.375us 00:07:21.753 50.00000% : 14216.271us 00:07:21.753 75.00000% : 15627.815us 00:07:21.753 90.00000% : 16938.535us 00:07:21.753 95.00000% : 17845.957us 00:07:21.753 98.00000% : 19257.502us 00:07:21.753 99.00000% : 21374.818us 00:07:21.753 99.50000% : 28029.243us 00:07:21.753 99.90000% : 29037.489us 00:07:21.753 99.99000% : 29239.138us 00:07:21.753 99.99900% : 29239.138us 00:07:21.753 99.99990% : 29239.138us 00:07:21.753 99.99999% : 29239.138us 00:07:21.753 00:07:21.753 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:21.753 ================================================================================= 00:07:21.753 1.00000% : 6150.302us 00:07:21.753 10.00000% : 11645.243us 00:07:21.753 25.00000% : 12905.551us 00:07:21.753 50.00000% : 14216.271us 00:07:21.753 75.00000% : 15627.815us 00:07:21.753 90.00000% : 17140.185us 00:07:21.753 95.00000% : 17745.132us 00:07:21.753 98.00000% : 19257.502us 00:07:21.753 99.00000% : 21072.345us 00:07:21.753 99.50000% : 26617.698us 00:07:21.753 99.90000% : 27625.945us 00:07:21.753 99.99000% : 28029.243us 00:07:21.753 99.99900% : 28029.243us 00:07:21.753 99.99990% : 28029.243us 00:07:21.753 99.99999% : 28029.243us 00:07:21.753 00:07:21.753 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:21.753 ================================================================================= 00:07:21.753 1.00000% : 6225.920us 00:07:21.753 10.00000% : 11645.243us 00:07:21.753 25.00000% : 12905.551us 00:07:21.753 50.00000% : 14317.095us 00:07:21.753 75.00000% : 15526.991us 00:07:21.753 90.00000% : 17039.360us 00:07:21.753 95.00000% : 17644.308us 00:07:21.753 98.00000% : 19257.502us 00:07:21.753 99.00000% : 20467.397us 00:07:21.753 99.50000% : 25105.329us 00:07:21.753 99.90000% : 26012.751us 00:07:21.753 99.99000% : 26416.049us 00:07:21.753 99.99900% : 26416.049us 00:07:21.753 99.99990% : 26416.049us 00:07:21.753 99.99999% : 26416.049us 00:07:21.753 00:07:21.753 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:21.753 ================================================================================= 00:07:21.753 1.00000% : 6200.714us 00:07:21.753 10.00000% : 11393.182us 00:07:21.753 25.00000% : 13107.200us 00:07:21.753 50.00000% : 14216.271us 00:07:21.753 75.00000% : 15627.815us 00:07:21.753 90.00000% : 16938.535us 00:07:21.753 95.00000% : 17644.308us 00:07:21.753 98.00000% : 18955.028us 
00:07:21.753 99.00000% : 19459.151us 00:07:21.753 99.50000% : 23693.785us 00:07:21.753 99.90000% : 24601.206us 00:07:21.753 99.99000% : 24903.680us 00:07:21.753 99.99900% : 24903.680us 00:07:21.753 99.99990% : 24903.680us 00:07:21.753 99.99999% : 24903.680us 00:07:21.753 00:07:21.753 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:21.753 ================================================================================= 00:07:21.753 1.00000% : 6200.714us 00:07:21.753 10.00000% : 11494.006us 00:07:21.753 25.00000% : 13006.375us 00:07:21.753 50.00000% : 14216.271us 00:07:21.753 75.00000% : 15627.815us 00:07:21.753 90.00000% : 16837.711us 00:07:21.754 95.00000% : 17644.308us 00:07:21.754 98.00000% : 18753.378us 00:07:21.754 99.00000% : 19559.975us 00:07:21.754 99.50000% : 22080.591us 00:07:21.754 99.90000% : 23088.837us 00:07:21.754 99.99000% : 23290.486us 00:07:21.754 99.99900% : 23290.486us 00:07:21.754 99.99990% : 23290.486us 00:07:21.754 99.99999% : 23290.486us 00:07:21.754 00:07:21.754 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:21.754 ================================================================================= 00:07:21.754 1.00000% : 6200.714us 00:07:21.754 10.00000% : 11443.594us 00:07:21.754 25.00000% : 13006.375us 00:07:21.754 50.00000% : 14216.271us 00:07:21.754 75.00000% : 15627.815us 00:07:21.754 90.00000% : 16837.711us 00:07:21.754 95.00000% : 17745.132us 00:07:21.754 98.00000% : 18753.378us 00:07:21.754 99.00000% : 19862.449us 00:07:21.754 99.50000% : 20467.397us 00:07:21.754 99.90000% : 21475.643us 00:07:21.754 99.99000% : 21677.292us 00:07:21.754 99.99900% : 21677.292us 00:07:21.754 99.99990% : 21677.292us 00:07:21.754 99.99999% : 21677.292us 00:07:21.754 00:07:21.754 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:21.754 ============================================================================== 00:07:21.754 Range in us Cumulative IO count 00:07:21.754 5822.622 - 5847.828: 0.0220% ( 2) 00:07:21.754 5847.828 - 5873.034: 0.1320% ( 10) 00:07:21.754 5873.034 - 5898.240: 0.1871% ( 5) 00:07:21.754 5898.240 - 5923.446: 0.2091% ( 2) 00:07:21.754 5923.446 - 5948.652: 0.2311% ( 2) 00:07:21.754 5948.652 - 5973.858: 0.3741% ( 13) 00:07:21.754 5973.858 - 5999.065: 0.4291% ( 5) 00:07:21.754 5999.065 - 6024.271: 0.5062% ( 7) 00:07:21.754 6024.271 - 6049.477: 0.5502% ( 4) 00:07:21.754 6049.477 - 6074.683: 0.5832% ( 3) 00:07:21.754 6074.683 - 6099.889: 0.6382% ( 5) 00:07:21.754 6099.889 - 6125.095: 0.7042% ( 6) 00:07:21.754 6125.095 - 6150.302: 0.7923% ( 8) 00:07:21.754 6150.302 - 6175.508: 0.9463% ( 14) 00:07:21.754 6175.508 - 6200.714: 1.0783% ( 12) 00:07:21.754 6200.714 - 6225.920: 1.1774% ( 9) 00:07:21.754 6225.920 - 6251.126: 1.2764% ( 9) 00:07:21.754 6251.126 - 6276.332: 1.3534% ( 7) 00:07:21.754 6276.332 - 6301.538: 1.4195% ( 6) 00:07:21.754 6301.538 - 6326.745: 1.4855% ( 6) 00:07:21.754 6326.745 - 6351.951: 1.5735% ( 8) 00:07:21.754 6351.951 - 6377.157: 1.6505% ( 7) 00:07:21.754 6377.157 - 6402.363: 1.7276% ( 7) 00:07:21.754 6402.363 - 6427.569: 1.8156% ( 8) 00:07:21.754 6427.569 - 6452.775: 1.9036% ( 8) 00:07:21.754 6452.775 - 6503.188: 2.1017% ( 18) 00:07:21.754 6503.188 - 6553.600: 2.3107% ( 19) 00:07:21.754 6553.600 - 6604.012: 2.4758% ( 15) 00:07:21.754 6604.012 - 6654.425: 2.6298% ( 14) 00:07:21.754 6654.425 - 6704.837: 2.7949% ( 15) 00:07:21.754 6704.837 - 6755.249: 2.9159% ( 11) 00:07:21.754 6755.249 - 6805.662: 2.9930% ( 7) 00:07:21.754 6805.662 - 6856.074: 3.0700% ( 7) 00:07:21.754 6856.074 - 
6906.486: 3.1360% ( 6) 00:07:21.754 6906.486 - 6956.898: 3.1690% ( 3) 00:07:21.754 6956.898 - 7007.311: 3.1910% ( 2) 00:07:21.754 7007.311 - 7057.723: 3.2240% ( 3) 00:07:21.754 7057.723 - 7108.135: 3.2570% ( 3) 00:07:21.754 7108.135 - 7158.548: 3.2901% ( 3) 00:07:21.754 7158.548 - 7208.960: 3.3231% ( 3) 00:07:21.754 7208.960 - 7259.372: 3.3561% ( 3) 00:07:21.754 7259.372 - 7309.785: 3.3891% ( 3) 00:07:21.754 7309.785 - 7360.197: 3.4221% ( 3) 00:07:21.754 7360.197 - 7410.609: 3.4551% ( 3) 00:07:21.754 7410.609 - 7461.022: 3.4991% ( 4) 00:07:21.754 7461.022 - 7511.434: 3.5871% ( 8) 00:07:21.754 7511.434 - 7561.846: 3.6532% ( 6) 00:07:21.754 7561.846 - 7612.258: 3.7192% ( 6) 00:07:21.754 7612.258 - 7662.671: 3.7742% ( 5) 00:07:21.754 7662.671 - 7713.083: 3.8402% ( 6) 00:07:21.754 7713.083 - 7763.495: 3.9283% ( 8) 00:07:21.754 7763.495 - 7813.908: 4.0273% ( 9) 00:07:21.754 7813.908 - 7864.320: 4.0933% ( 6) 00:07:21.754 7864.320 - 7914.732: 4.1483% ( 5) 00:07:21.754 7914.732 - 7965.145: 4.2254% ( 7) 00:07:21.754 7965.145 - 8015.557: 4.2914% ( 6) 00:07:21.754 8015.557 - 8065.969: 4.3684% ( 7) 00:07:21.754 8065.969 - 8116.382: 4.4454% ( 7) 00:07:21.754 8116.382 - 8166.794: 4.5114% ( 6) 00:07:21.754 8166.794 - 8217.206: 4.5885% ( 7) 00:07:21.754 8217.206 - 8267.618: 4.6545% ( 6) 00:07:21.754 8267.618 - 8318.031: 4.7205% ( 6) 00:07:21.754 8318.031 - 8368.443: 4.7865% ( 6) 00:07:21.754 8368.443 - 8418.855: 4.8526% ( 6) 00:07:21.754 8418.855 - 8469.268: 4.9076% ( 5) 00:07:21.754 8469.268 - 8519.680: 4.9296% ( 2) 00:07:21.754 8620.505 - 8670.917: 4.9516% ( 2) 00:07:21.754 8670.917 - 8721.329: 5.0286% ( 7) 00:07:21.754 8721.329 - 8771.742: 5.0506% ( 2) 00:07:21.754 8771.742 - 8822.154: 5.0946% ( 4) 00:07:21.754 8822.154 - 8872.566: 5.1386% ( 4) 00:07:21.754 8872.566 - 8922.978: 5.1827% ( 4) 00:07:21.754 8922.978 - 8973.391: 5.2267% ( 4) 00:07:21.754 8973.391 - 9023.803: 5.2597% ( 3) 00:07:21.754 9023.803 - 9074.215: 5.3037% ( 4) 00:07:21.754 9074.215 - 9124.628: 5.3477% ( 4) 00:07:21.754 9124.628 - 9175.040: 5.3917% ( 4) 00:07:21.754 9175.040 - 9225.452: 5.4357% ( 4) 00:07:21.754 9225.452 - 9275.865: 5.4798% ( 4) 00:07:21.754 9275.865 - 9326.277: 5.5898% ( 10) 00:07:21.754 9326.277 - 9376.689: 5.6448% ( 5) 00:07:21.754 9376.689 - 9427.102: 5.7218% ( 7) 00:07:21.754 9427.102 - 9477.514: 5.7879% ( 6) 00:07:21.754 9477.514 - 9527.926: 5.8319% ( 4) 00:07:21.754 9527.926 - 9578.338: 5.8649% ( 3) 00:07:21.754 9578.338 - 9628.751: 5.8979% ( 3) 00:07:21.754 9628.751 - 9679.163: 5.9419% ( 4) 00:07:21.754 9679.163 - 9729.575: 6.0189% ( 7) 00:07:21.754 9729.575 - 9779.988: 6.0849% ( 6) 00:07:21.754 9779.988 - 9830.400: 6.1840% ( 9) 00:07:21.754 9830.400 - 9880.812: 6.3050% ( 11) 00:07:21.754 9880.812 - 9931.225: 6.4040% ( 9) 00:07:21.754 9931.225 - 9981.637: 6.5251% ( 11) 00:07:21.754 9981.637 - 10032.049: 6.6131% ( 8) 00:07:21.754 10032.049 - 10082.462: 6.6901% ( 7) 00:07:21.754 10082.462 - 10132.874: 6.7892% ( 9) 00:07:21.754 10132.874 - 10183.286: 6.8882% ( 9) 00:07:21.754 10183.286 - 10233.698: 6.9872% ( 9) 00:07:21.754 10233.698 - 10284.111: 7.0643% ( 7) 00:07:21.754 10284.111 - 10334.523: 7.1083% ( 4) 00:07:21.754 10334.523 - 10384.935: 7.1743% ( 6) 00:07:21.754 10384.935 - 10435.348: 7.2403% ( 6) 00:07:21.754 10435.348 - 10485.760: 7.3063% ( 6) 00:07:21.754 10485.760 - 10536.172: 7.3614% ( 5) 00:07:21.754 10536.172 - 10586.585: 7.4274% ( 6) 00:07:21.754 10586.585 - 10636.997: 7.5044% ( 7) 00:07:21.754 10636.997 - 10687.409: 7.6474% ( 13) 00:07:21.754 10687.409 - 10737.822: 7.7355% ( 8) 00:07:21.754 
10737.822 - 10788.234: 7.8015% ( 6) 00:07:21.754 10788.234 - 10838.646: 7.9115% ( 10) 00:07:21.754 10838.646 - 10889.058: 8.0216% ( 10) 00:07:21.754 10889.058 - 10939.471: 8.1646% ( 13) 00:07:21.754 10939.471 - 10989.883: 8.2967% ( 12) 00:07:21.754 10989.883 - 11040.295: 8.4067% ( 10) 00:07:21.754 11040.295 - 11090.708: 8.5607% ( 14) 00:07:21.754 11090.708 - 11141.120: 8.7368% ( 16) 00:07:21.754 11141.120 - 11191.532: 8.9239% ( 17) 00:07:21.754 11191.532 - 11241.945: 9.1329% ( 19) 00:07:21.754 11241.945 - 11292.357: 9.3530% ( 20) 00:07:21.754 11292.357 - 11342.769: 9.5401% ( 17) 00:07:21.754 11342.769 - 11393.182: 9.7051% ( 15) 00:07:21.754 11393.182 - 11443.594: 9.8812% ( 16) 00:07:21.754 11443.594 - 11494.006: 10.1012% ( 20) 00:07:21.754 11494.006 - 11544.418: 10.3653% ( 24) 00:07:21.754 11544.418 - 11594.831: 10.6514% ( 26) 00:07:21.754 11594.831 - 11645.243: 11.0035% ( 32) 00:07:21.754 11645.243 - 11695.655: 11.3006% ( 27) 00:07:21.754 11695.655 - 11746.068: 11.5757% ( 25) 00:07:21.754 11746.068 - 11796.480: 11.8398% ( 24) 00:07:21.754 11796.480 - 11846.892: 12.1479% ( 28) 00:07:21.754 11846.892 - 11897.305: 12.4010% ( 23) 00:07:21.754 11897.305 - 11947.717: 12.6210% ( 20) 00:07:21.754 11947.717 - 11998.129: 12.9181% ( 27) 00:07:21.754 11998.129 - 12048.542: 13.2372% ( 29) 00:07:21.754 12048.542 - 12098.954: 13.7544% ( 47) 00:07:21.754 12098.954 - 12149.366: 14.2055% ( 41) 00:07:21.754 12149.366 - 12199.778: 14.6457% ( 40) 00:07:21.754 12199.778 - 12250.191: 15.0748% ( 39) 00:07:21.754 12250.191 - 12300.603: 15.5040% ( 39) 00:07:21.754 12300.603 - 12351.015: 16.0211% ( 47) 00:07:21.754 12351.015 - 12401.428: 16.6263% ( 55) 00:07:21.754 12401.428 - 12451.840: 17.3526% ( 66) 00:07:21.754 12451.840 - 12502.252: 17.9467% ( 54) 00:07:21.754 12502.252 - 12552.665: 18.6180% ( 61) 00:07:21.754 12552.665 - 12603.077: 19.2782% ( 60) 00:07:21.754 12603.077 - 12653.489: 20.0374% ( 69) 00:07:21.754 12653.489 - 12703.902: 20.8077% ( 70) 00:07:21.754 12703.902 - 12754.314: 21.6329% ( 75) 00:07:21.754 12754.314 - 12804.726: 22.4142% ( 71) 00:07:21.754 12804.726 - 12855.138: 23.2724% ( 78) 00:07:21.754 12855.138 - 12905.551: 24.3068% ( 94) 00:07:21.754 12905.551 - 13006.375: 26.2654% ( 178) 00:07:21.754 13006.375 - 13107.200: 28.1140% ( 168) 00:07:21.754 13107.200 - 13208.025: 29.8856% ( 161) 00:07:21.754 13208.025 - 13308.849: 31.9872% ( 191) 00:07:21.754 13308.849 - 13409.674: 34.0999% ( 192) 00:07:21.754 13409.674 - 13510.498: 36.2786% ( 198) 00:07:21.754 13510.498 - 13611.323: 38.4903% ( 201) 00:07:21.754 13611.323 - 13712.148: 40.8451% ( 214) 00:07:21.754 13712.148 - 13812.972: 42.9137% ( 188) 00:07:21.754 13812.972 - 13913.797: 44.7733% ( 169) 00:07:21.754 13913.797 - 14014.622: 46.6879% ( 174) 00:07:21.754 14014.622 - 14115.446: 48.5255% ( 167) 00:07:21.754 14115.446 - 14216.271: 50.2861% ( 160) 00:07:21.754 14216.271 - 14317.095: 52.1897% ( 173) 00:07:21.754 14317.095 - 14417.920: 53.9833% ( 163) 00:07:21.754 14417.920 - 14518.745: 55.7328% ( 159) 00:07:21.754 14518.745 - 14619.569: 57.9115% ( 198) 00:07:21.754 14619.569 - 14720.394: 59.9802% ( 188) 00:07:21.754 14720.394 - 14821.218: 61.9388% ( 178) 00:07:21.754 14821.218 - 14922.043: 63.9965% ( 187) 00:07:21.754 14922.043 - 15022.868: 65.7901% ( 163) 00:07:21.755 15022.868 - 15123.692: 67.7487% ( 178) 00:07:21.755 15123.692 - 15224.517: 69.6413% ( 172) 00:07:21.755 15224.517 - 15325.342: 71.5999% ( 178) 00:07:21.755 15325.342 - 15426.166: 73.3825% ( 162) 00:07:21.755 15426.166 - 15526.991: 74.7909% ( 128) 00:07:21.755 15526.991 - 15627.815: 
75.9133% ( 102) 00:07:21.755 15627.815 - 15728.640: 77.0357% ( 102) 00:07:21.755 15728.640 - 15829.465: 78.1690% ( 103) 00:07:21.755 15829.465 - 15930.289: 79.4014% ( 112) 00:07:21.755 15930.289 - 16031.114: 80.5898% ( 108) 00:07:21.755 16031.114 - 16131.938: 81.5911% ( 91) 00:07:21.755 16131.938 - 16232.763: 82.6695% ( 98) 00:07:21.755 16232.763 - 16333.588: 83.9899% ( 120) 00:07:21.755 16333.588 - 16434.412: 85.1893% ( 109) 00:07:21.755 16434.412 - 16535.237: 86.6087% ( 129) 00:07:21.755 16535.237 - 16636.062: 87.8081% ( 109) 00:07:21.755 16636.062 - 16736.886: 88.8534% ( 95) 00:07:21.755 16736.886 - 16837.711: 89.7887% ( 85) 00:07:21.755 16837.711 - 16938.535: 90.5480% ( 69) 00:07:21.755 16938.535 - 17039.360: 91.1752% ( 57) 00:07:21.755 17039.360 - 17140.185: 91.7914% ( 56) 00:07:21.755 17140.185 - 17241.009: 92.4186% ( 57) 00:07:21.755 17241.009 - 17341.834: 92.9688% ( 50) 00:07:21.755 17341.834 - 17442.658: 93.4969% ( 48) 00:07:21.755 17442.658 - 17543.483: 93.9481% ( 41) 00:07:21.755 17543.483 - 17644.308: 94.3992% ( 41) 00:07:21.755 17644.308 - 17745.132: 94.7623% ( 33) 00:07:21.755 17745.132 - 17845.957: 95.1695% ( 37) 00:07:21.755 17845.957 - 17946.782: 95.5656% ( 36) 00:07:21.755 17946.782 - 18047.606: 96.0167% ( 41) 00:07:21.755 18047.606 - 18148.431: 96.4459% ( 39) 00:07:21.755 18148.431 - 18249.255: 96.8530% ( 37) 00:07:21.755 18249.255 - 18350.080: 97.1831% ( 30) 00:07:21.755 18350.080 - 18450.905: 97.4252% ( 22) 00:07:21.755 18450.905 - 18551.729: 97.6012% ( 16) 00:07:21.755 18551.729 - 18652.554: 97.6893% ( 8) 00:07:21.755 18652.554 - 18753.378: 97.7553% ( 6) 00:07:21.755 18753.378 - 18854.203: 97.8103% ( 5) 00:07:21.755 18854.203 - 18955.028: 97.8543% ( 4) 00:07:21.755 18955.028 - 19055.852: 97.8873% ( 3) 00:07:21.755 19055.852 - 19156.677: 97.9533% ( 6) 00:07:21.755 19156.677 - 19257.502: 98.0744% ( 11) 00:07:21.755 19257.502 - 19358.326: 98.1074% ( 3) 00:07:21.755 19358.326 - 19459.151: 98.1624% ( 5) 00:07:21.755 19459.151 - 19559.975: 98.2284% ( 6) 00:07:21.755 19559.975 - 19660.800: 98.2945% ( 6) 00:07:21.755 19660.800 - 19761.625: 98.3495% ( 5) 00:07:21.755 19761.625 - 19862.449: 98.4155% ( 6) 00:07:21.755 19862.449 - 19963.274: 98.4815% ( 6) 00:07:21.755 19963.274 - 20064.098: 98.5365% ( 5) 00:07:21.755 20064.098 - 20164.923: 98.5915% ( 5) 00:07:21.755 20467.397 - 20568.222: 98.6246% ( 3) 00:07:21.755 20568.222 - 20669.046: 98.6686% ( 4) 00:07:21.755 20669.046 - 20769.871: 98.7236% ( 5) 00:07:21.755 20769.871 - 20870.695: 98.7676% ( 4) 00:07:21.755 20870.695 - 20971.520: 98.8116% ( 4) 00:07:21.755 20971.520 - 21072.345: 98.8666% ( 5) 00:07:21.755 21072.345 - 21173.169: 98.9107% ( 4) 00:07:21.755 21173.169 - 21273.994: 98.9547% ( 4) 00:07:21.755 21273.994 - 21374.818: 99.0097% ( 5) 00:07:21.755 21374.818 - 21475.643: 99.0537% ( 4) 00:07:21.755 21475.643 - 21576.468: 99.1087% ( 5) 00:07:21.755 21576.468 - 21677.292: 99.1527% ( 4) 00:07:21.755 21677.292 - 21778.117: 99.1967% ( 4) 00:07:21.755 21778.117 - 21878.942: 99.2408% ( 4) 00:07:21.755 21878.942 - 21979.766: 99.2958% ( 5) 00:07:21.755 27424.295 - 27625.945: 99.3618% ( 6) 00:07:21.755 27625.945 - 27827.594: 99.4388% ( 7) 00:07:21.755 27827.594 - 28029.243: 99.5268% ( 8) 00:07:21.755 28029.243 - 28230.892: 99.6149% ( 8) 00:07:21.755 28230.892 - 28432.542: 99.6919% ( 7) 00:07:21.755 28432.542 - 28634.191: 99.7799% ( 8) 00:07:21.755 28634.191 - 28835.840: 99.8570% ( 7) 00:07:21.755 28835.840 - 29037.489: 99.9450% ( 8) 00:07:21.755 29037.489 - 29239.138: 100.0000% ( 5) 00:07:21.755 00:07:21.755 Latency histogram for 
PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:21.755 ============================================================================== 00:07:21.755 Range in us Cumulative IO count 00:07:21.755 5620.972 - 5646.178: 0.0110% ( 1) 00:07:21.755 5646.178 - 5671.385: 0.0330% ( 2) 00:07:21.755 5671.385 - 5696.591: 0.0440% ( 1) 00:07:21.755 5696.591 - 5721.797: 0.0660% ( 2) 00:07:21.755 5721.797 - 5747.003: 0.0770% ( 1) 00:07:21.755 5747.003 - 5772.209: 0.0990% ( 2) 00:07:21.755 5797.415 - 5822.622: 0.1320% ( 3) 00:07:21.755 5847.828 - 5873.034: 0.2201% ( 8) 00:07:21.755 5873.034 - 5898.240: 0.3631% ( 13) 00:07:21.755 5898.240 - 5923.446: 0.4401% ( 7) 00:07:21.755 5923.446 - 5948.652: 0.4842% ( 4) 00:07:21.755 5948.652 - 5973.858: 0.5502% ( 6) 00:07:21.755 5973.858 - 5999.065: 0.6052% ( 5) 00:07:21.755 5999.065 - 6024.271: 0.6822% ( 7) 00:07:21.755 6024.271 - 6049.477: 0.7702% ( 8) 00:07:21.755 6049.477 - 6074.683: 0.8143% ( 4) 00:07:21.755 6074.683 - 6099.889: 0.9243% ( 10) 00:07:21.755 6099.889 - 6125.095: 0.9793% ( 5) 00:07:21.755 6125.095 - 6150.302: 1.0233% ( 4) 00:07:21.755 6150.302 - 6175.508: 1.1444% ( 11) 00:07:21.755 6175.508 - 6200.714: 1.2324% ( 8) 00:07:21.755 6200.714 - 6225.920: 1.3204% ( 8) 00:07:21.755 6225.920 - 6251.126: 1.3974% ( 7) 00:07:21.755 6251.126 - 6276.332: 1.5075% ( 10) 00:07:21.755 6276.332 - 6301.538: 1.5845% ( 7) 00:07:21.755 6301.538 - 6326.745: 1.6725% ( 8) 00:07:21.755 6326.745 - 6351.951: 1.7165% ( 4) 00:07:21.755 6351.951 - 6377.157: 1.8486% ( 12) 00:07:21.755 6377.157 - 6402.363: 1.9036% ( 5) 00:07:21.755 6402.363 - 6427.569: 1.9916% ( 8) 00:07:21.755 6427.569 - 6452.775: 2.0797% ( 8) 00:07:21.755 6452.775 - 6503.188: 2.2227% ( 13) 00:07:21.755 6503.188 - 6553.600: 2.4098% ( 17) 00:07:21.755 6553.600 - 6604.012: 2.5748% ( 15) 00:07:21.755 6604.012 - 6654.425: 2.7289% ( 14) 00:07:21.755 6654.425 - 6704.837: 2.8499% ( 11) 00:07:21.755 6704.837 - 6755.249: 3.0260% ( 16) 00:07:21.755 6755.249 - 6805.662: 3.1470% ( 11) 00:07:21.755 6805.662 - 6856.074: 3.2350% ( 8) 00:07:21.755 6856.074 - 6906.486: 3.2680% ( 3) 00:07:21.755 6906.486 - 6956.898: 3.3011% ( 3) 00:07:21.755 6956.898 - 7007.311: 3.3231% ( 2) 00:07:21.755 7007.311 - 7057.723: 3.3451% ( 2) 00:07:21.755 7057.723 - 7108.135: 3.3781% ( 3) 00:07:21.755 7108.135 - 7158.548: 3.4001% ( 2) 00:07:21.755 7158.548 - 7208.960: 3.4441% ( 4) 00:07:21.755 7208.960 - 7259.372: 3.4991% ( 5) 00:07:21.755 7259.372 - 7309.785: 3.5651% ( 6) 00:07:21.755 7309.785 - 7360.197: 3.6092% ( 4) 00:07:21.755 7360.197 - 7410.609: 3.6422% ( 3) 00:07:21.755 7410.609 - 7461.022: 3.6862% ( 4) 00:07:21.755 7461.022 - 7511.434: 3.6972% ( 1) 00:07:21.755 7511.434 - 7561.846: 3.7192% ( 2) 00:07:21.755 7561.846 - 7612.258: 3.7632% ( 4) 00:07:21.755 7612.258 - 7662.671: 3.8402% ( 7) 00:07:21.755 7662.671 - 7713.083: 3.8842% ( 4) 00:07:21.755 7713.083 - 7763.495: 3.9393% ( 5) 00:07:21.755 7763.495 - 7813.908: 3.9943% ( 5) 00:07:21.755 7813.908 - 7864.320: 4.0493% ( 5) 00:07:21.755 7864.320 - 7914.732: 4.1153% ( 6) 00:07:21.755 7914.732 - 7965.145: 4.1813% ( 6) 00:07:21.755 7965.145 - 8015.557: 4.2364% ( 5) 00:07:21.755 8015.557 - 8065.969: 4.2914% ( 5) 00:07:21.755 8065.969 - 8116.382: 4.3574% ( 6) 00:07:21.755 8116.382 - 8166.794: 4.4234% ( 6) 00:07:21.755 8166.794 - 8217.206: 4.4674% ( 4) 00:07:21.755 8217.206 - 8267.618: 4.5335% ( 6) 00:07:21.755 8267.618 - 8318.031: 4.5885% ( 5) 00:07:21.755 8318.031 - 8368.443: 4.6545% ( 6) 00:07:21.755 8368.443 - 8418.855: 4.6985% ( 4) 00:07:21.755 8418.855 - 8469.268: 4.7755% ( 7) 00:07:21.755 8469.268 - 
8519.680: 4.8305% ( 5) 00:07:21.755 8519.680 - 8570.092: 4.8856% ( 5) 00:07:21.755 8570.092 - 8620.505: 4.9516% ( 6) 00:07:21.755 8620.505 - 8670.917: 4.9956% ( 4) 00:07:21.755 8670.917 - 8721.329: 5.0616% ( 6) 00:07:21.755 8721.329 - 8771.742: 5.0836% ( 2) 00:07:21.755 8771.742 - 8822.154: 5.1496% ( 6) 00:07:21.755 8822.154 - 8872.566: 5.1827% ( 3) 00:07:21.755 8872.566 - 8922.978: 5.2377% ( 5) 00:07:21.755 8922.978 - 8973.391: 5.3037% ( 6) 00:07:21.755 8973.391 - 9023.803: 5.3917% ( 8) 00:07:21.755 9023.803 - 9074.215: 5.4688% ( 7) 00:07:21.755 9074.215 - 9124.628: 5.5678% ( 9) 00:07:21.755 9124.628 - 9175.040: 5.6558% ( 8) 00:07:21.755 9175.040 - 9225.452: 5.7438% ( 8) 00:07:21.755 9225.452 - 9275.865: 5.8429% ( 9) 00:07:21.755 9275.865 - 9326.277: 5.9309% ( 8) 00:07:21.755 9326.277 - 9376.689: 5.9969% ( 6) 00:07:21.755 9376.689 - 9427.102: 6.1070% ( 10) 00:07:21.755 9427.102 - 9477.514: 6.2720% ( 15) 00:07:21.755 9477.514 - 9527.926: 6.3490% ( 7) 00:07:21.755 9527.926 - 9578.338: 6.4811% ( 12) 00:07:21.755 9578.338 - 9628.751: 6.5691% ( 8) 00:07:21.755 9628.751 - 9679.163: 6.7011% ( 12) 00:07:21.755 9679.163 - 9729.575: 6.8332% ( 12) 00:07:21.755 9729.575 - 9779.988: 6.9432% ( 10) 00:07:21.755 9779.988 - 9830.400: 7.0423% ( 9) 00:07:21.755 9830.400 - 9880.812: 7.1523% ( 10) 00:07:21.755 9880.812 - 9931.225: 7.2403% ( 8) 00:07:21.755 9931.225 - 9981.637: 7.3063% ( 6) 00:07:21.755 9981.637 - 10032.049: 7.3614% ( 5) 00:07:21.755 10032.049 - 10082.462: 7.3834% ( 2) 00:07:21.755 10082.462 - 10132.874: 7.4384% ( 5) 00:07:21.755 10132.874 - 10183.286: 7.4604% ( 2) 00:07:21.755 10183.286 - 10233.698: 7.4824% ( 2) 00:07:21.755 10233.698 - 10284.111: 7.5044% ( 2) 00:07:21.755 10284.111 - 10334.523: 7.5264% ( 2) 00:07:21.756 10334.523 - 10384.935: 7.5814% ( 5) 00:07:21.756 10384.935 - 10435.348: 7.6364% ( 5) 00:07:21.756 10435.348 - 10485.760: 7.6915% ( 5) 00:07:21.756 10485.760 - 10536.172: 7.7465% ( 5) 00:07:21.756 10536.172 - 10586.585: 7.7685% ( 2) 00:07:21.756 10586.585 - 10636.997: 7.8675% ( 9) 00:07:21.756 10636.997 - 10687.409: 7.9335% ( 6) 00:07:21.756 10687.409 - 10737.822: 7.9776% ( 4) 00:07:21.756 10737.822 - 10788.234: 8.0436% ( 6) 00:07:21.756 10788.234 - 10838.646: 8.0986% ( 5) 00:07:21.756 10838.646 - 10889.058: 8.1646% ( 6) 00:07:21.756 10889.058 - 10939.471: 8.2746% ( 10) 00:07:21.756 10939.471 - 10989.883: 8.3627% ( 8) 00:07:21.756 10989.883 - 11040.295: 8.4397% ( 7) 00:07:21.756 11040.295 - 11090.708: 8.5497% ( 10) 00:07:21.756 11090.708 - 11141.120: 8.6378% ( 8) 00:07:21.756 11141.120 - 11191.532: 8.7148% ( 7) 00:07:21.756 11191.532 - 11241.945: 8.8248% ( 10) 00:07:21.756 11241.945 - 11292.357: 8.9129% ( 8) 00:07:21.756 11292.357 - 11342.769: 9.0119% ( 9) 00:07:21.756 11342.769 - 11393.182: 9.0779% ( 6) 00:07:21.756 11393.182 - 11443.594: 9.1989% ( 11) 00:07:21.756 11443.594 - 11494.006: 9.4300% ( 21) 00:07:21.756 11494.006 - 11544.418: 9.6611% ( 21) 00:07:21.756 11544.418 - 11594.831: 9.9252% ( 24) 00:07:21.756 11594.831 - 11645.243: 10.2113% ( 26) 00:07:21.756 11645.243 - 11695.655: 10.4864% ( 25) 00:07:21.756 11695.655 - 11746.068: 10.9485% ( 42) 00:07:21.756 11746.068 - 11796.480: 11.2126% ( 24) 00:07:21.756 11796.480 - 11846.892: 11.5317% ( 29) 00:07:21.756 11846.892 - 11897.305: 11.8838% ( 32) 00:07:21.756 11897.305 - 11947.717: 12.3239% ( 40) 00:07:21.756 11947.717 - 11998.129: 12.7311% ( 37) 00:07:21.756 11998.129 - 12048.542: 13.2152% ( 44) 00:07:21.756 12048.542 - 12098.954: 13.6444% ( 39) 00:07:21.756 12098.954 - 12149.366: 14.0735% ( 39) 00:07:21.756 12149.366 - 
12199.778: 14.5357% ( 42) 00:07:21.756 12199.778 - 12250.191: 15.1959% ( 60) 00:07:21.756 12250.191 - 12300.603: 15.9001% ( 64) 00:07:21.756 12300.603 - 12351.015: 16.4833% ( 53) 00:07:21.756 12351.015 - 12401.428: 17.2315% ( 68) 00:07:21.756 12401.428 - 12451.840: 18.0348% ( 73) 00:07:21.756 12451.840 - 12502.252: 18.7830% ( 68) 00:07:21.756 12502.252 - 12552.665: 19.6523% ( 79) 00:07:21.756 12552.665 - 12603.077: 20.4225% ( 70) 00:07:21.756 12603.077 - 12653.489: 21.2478% ( 75) 00:07:21.756 12653.489 - 12703.902: 22.1061% ( 78) 00:07:21.756 12703.902 - 12754.314: 22.8873% ( 71) 00:07:21.756 12754.314 - 12804.726: 23.8226% ( 85) 00:07:21.756 12804.726 - 12855.138: 24.8570% ( 94) 00:07:21.756 12855.138 - 12905.551: 25.6932% ( 76) 00:07:21.756 12905.551 - 13006.375: 27.4318% ( 158) 00:07:21.756 13006.375 - 13107.200: 29.3464% ( 174) 00:07:21.756 13107.200 - 13208.025: 31.1950% ( 168) 00:07:21.756 13208.025 - 13308.849: 33.1426% ( 177) 00:07:21.756 13308.849 - 13409.674: 35.0462% ( 173) 00:07:21.756 13409.674 - 13510.498: 36.9168% ( 170) 00:07:21.756 13510.498 - 13611.323: 38.6994% ( 162) 00:07:21.756 13611.323 - 13712.148: 40.4159% ( 156) 00:07:21.756 13712.148 - 13812.972: 42.2535% ( 167) 00:07:21.756 13812.972 - 13913.797: 44.3992% ( 195) 00:07:21.756 13913.797 - 14014.622: 46.3688% ( 179) 00:07:21.756 14014.622 - 14115.446: 48.5145% ( 195) 00:07:21.756 14115.446 - 14216.271: 50.3961% ( 171) 00:07:21.756 14216.271 - 14317.095: 52.2007% ( 164) 00:07:21.756 14317.095 - 14417.920: 54.1593% ( 178) 00:07:21.756 14417.920 - 14518.745: 56.1290% ( 179) 00:07:21.756 14518.745 - 14619.569: 58.1426% ( 183) 00:07:21.756 14619.569 - 14720.394: 60.0792% ( 176) 00:07:21.756 14720.394 - 14821.218: 62.2249% ( 195) 00:07:21.756 14821.218 - 14922.043: 64.0735% ( 168) 00:07:21.756 14922.043 - 15022.868: 65.8781% ( 164) 00:07:21.756 15022.868 - 15123.692: 67.5616% ( 153) 00:07:21.756 15123.692 - 15224.517: 69.3662% ( 164) 00:07:21.756 15224.517 - 15325.342: 70.9837% ( 147) 00:07:21.756 15325.342 - 15426.166: 72.6673% ( 153) 00:07:21.756 15426.166 - 15526.991: 74.2518% ( 144) 00:07:21.756 15526.991 - 15627.815: 75.5942% ( 122) 00:07:21.756 15627.815 - 15728.640: 77.2447% ( 150) 00:07:21.756 15728.640 - 15829.465: 78.5541% ( 119) 00:07:21.756 15829.465 - 15930.289: 79.8966% ( 122) 00:07:21.756 15930.289 - 16031.114: 81.3160% ( 129) 00:07:21.756 16031.114 - 16131.938: 82.2513% ( 85) 00:07:21.756 16131.938 - 16232.763: 83.0656% ( 74) 00:07:21.756 16232.763 - 16333.588: 83.9899% ( 84) 00:07:21.756 16333.588 - 16434.412: 85.0022% ( 92) 00:07:21.756 16434.412 - 16535.237: 85.9595% ( 87) 00:07:21.756 16535.237 - 16636.062: 86.8728% ( 83) 00:07:21.756 16636.062 - 16736.886: 87.7421% ( 79) 00:07:21.756 16736.886 - 16837.711: 88.5233% ( 71) 00:07:21.756 16837.711 - 16938.535: 89.2386% ( 65) 00:07:21.756 16938.535 - 17039.360: 89.9318% ( 63) 00:07:21.756 17039.360 - 17140.185: 90.6580% ( 66) 00:07:21.756 17140.185 - 17241.009: 91.3182% ( 60) 00:07:21.756 17241.009 - 17341.834: 92.0885% ( 70) 00:07:21.756 17341.834 - 17442.658: 92.9908% ( 82) 00:07:21.756 17442.658 - 17543.483: 93.8710% ( 80) 00:07:21.756 17543.483 - 17644.308: 94.3772% ( 46) 00:07:21.756 17644.308 - 17745.132: 95.0044% ( 57) 00:07:21.756 17745.132 - 17845.957: 95.3565% ( 32) 00:07:21.756 17845.957 - 17946.782: 95.7526% ( 36) 00:07:21.756 17946.782 - 18047.606: 96.0717% ( 29) 00:07:21.756 18047.606 - 18148.431: 96.4569% ( 35) 00:07:21.756 18148.431 - 18249.255: 96.7099% ( 23) 00:07:21.756 18249.255 - 18350.080: 96.9300% ( 20) 00:07:21.756 18350.080 - 
18450.905: 97.1171% ( 17) 00:07:21.756 18450.905 - 18551.729: 97.4472% ( 30) 00:07:21.756 18551.729 - 18652.554: 97.7223% ( 25) 00:07:21.756 18652.554 - 18753.378: 97.7663% ( 4) 00:07:21.756 18753.378 - 18854.203: 97.7773% ( 1) 00:07:21.756 18854.203 - 18955.028: 97.8103% ( 3) 00:07:21.756 18955.028 - 19055.852: 97.9093% ( 9) 00:07:21.756 19055.852 - 19156.677: 97.9864% ( 7) 00:07:21.756 19156.677 - 19257.502: 98.0524% ( 6) 00:07:21.756 19257.502 - 19358.326: 98.0964% ( 4) 00:07:21.756 19358.326 - 19459.151: 98.1294% ( 3) 00:07:21.756 19459.151 - 19559.975: 98.2064% ( 7) 00:07:21.756 19559.975 - 19660.800: 98.2394% ( 3) 00:07:21.756 19660.800 - 19761.625: 98.3055% ( 6) 00:07:21.756 19761.625 - 19862.449: 98.3385% ( 3) 00:07:21.756 19862.449 - 19963.274: 98.4045% ( 6) 00:07:21.756 19963.274 - 20064.098: 98.4485% ( 4) 00:07:21.756 20064.098 - 20164.923: 98.5365% ( 8) 00:07:21.756 20164.923 - 20265.748: 98.6136% ( 7) 00:07:21.756 20265.748 - 20366.572: 98.7016% ( 8) 00:07:21.756 20366.572 - 20467.397: 98.7566% ( 5) 00:07:21.756 20467.397 - 20568.222: 98.7896% ( 3) 00:07:21.756 20568.222 - 20669.046: 98.8336% ( 4) 00:07:21.756 20669.046 - 20769.871: 98.8776% ( 4) 00:07:21.756 20769.871 - 20870.695: 98.9217% ( 4) 00:07:21.756 20870.695 - 20971.520: 98.9657% ( 4) 00:07:21.756 20971.520 - 21072.345: 99.0097% ( 4) 00:07:21.756 21072.345 - 21173.169: 99.0537% ( 4) 00:07:21.756 21173.169 - 21273.994: 99.0977% ( 4) 00:07:21.756 21273.994 - 21374.818: 99.1417% ( 4) 00:07:21.756 21374.818 - 21475.643: 99.1747% ( 3) 00:07:21.756 21475.643 - 21576.468: 99.2077% ( 3) 00:07:21.756 21576.468 - 21677.292: 99.2628% ( 5) 00:07:21.756 21677.292 - 21778.117: 99.2958% ( 3) 00:07:21.756 25811.102 - 26012.751: 99.3178% ( 2) 00:07:21.756 26012.751 - 26214.400: 99.3948% ( 7) 00:07:21.756 26214.400 - 26416.049: 99.4718% ( 7) 00:07:21.756 26416.049 - 26617.698: 99.5379% ( 6) 00:07:21.756 26617.698 - 26819.348: 99.6039% ( 6) 00:07:21.756 26819.348 - 27020.997: 99.6919% ( 8) 00:07:21.756 27020.997 - 27222.646: 99.7689% ( 7) 00:07:21.756 27222.646 - 27424.295: 99.8460% ( 7) 00:07:21.756 27424.295 - 27625.945: 99.9230% ( 7) 00:07:21.756 27625.945 - 27827.594: 99.9890% ( 6) 00:07:21.756 27827.594 - 28029.243: 100.0000% ( 1) 00:07:21.756 00:07:21.756 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:21.756 ============================================================================== 00:07:21.756 Range in us Cumulative IO count 00:07:21.756 5898.240 - 5923.446: 0.0220% ( 2) 00:07:21.756 5923.446 - 5948.652: 0.0880% ( 6) 00:07:21.756 5948.652 - 5973.858: 0.1540% ( 6) 00:07:21.756 5973.858 - 5999.065: 0.2421% ( 8) 00:07:21.756 5999.065 - 6024.271: 0.3411% ( 9) 00:07:21.756 6024.271 - 6049.477: 0.4291% ( 8) 00:07:21.756 6049.477 - 6074.683: 0.5062% ( 7) 00:07:21.756 6074.683 - 6099.889: 0.5502% ( 4) 00:07:21.756 6099.889 - 6125.095: 0.6272% ( 7) 00:07:21.756 6125.095 - 6150.302: 0.7042% ( 7) 00:07:21.756 6150.302 - 6175.508: 0.8473% ( 13) 00:07:21.756 6175.508 - 6200.714: 0.9463% ( 9) 00:07:21.756 6200.714 - 6225.920: 1.0783% ( 12) 00:07:21.756 6225.920 - 6251.126: 1.1554% ( 7) 00:07:21.756 6251.126 - 6276.332: 1.2654% ( 10) 00:07:21.756 6276.332 - 6301.538: 1.3534% ( 8) 00:07:21.756 6301.538 - 6326.745: 1.4525% ( 9) 00:07:21.756 6326.745 - 6351.951: 1.5405% ( 8) 00:07:21.756 6351.951 - 6377.157: 1.6505% ( 10) 00:07:21.756 6377.157 - 6402.363: 1.7496% ( 9) 00:07:21.756 6402.363 - 6427.569: 1.8486% ( 9) 00:07:21.756 6427.569 - 6452.775: 1.9476% ( 9) 00:07:21.756 6452.775 - 6503.188: 2.1567% ( 19) 00:07:21.756 
6503.188 - 6553.600: 2.3548% ( 18) 00:07:21.756 6553.600 - 6604.012: 2.5638% ( 19) 00:07:21.756 6604.012 - 6654.425: 2.7509% ( 17) 00:07:21.756 6654.425 - 6704.837: 2.9489% ( 18) 00:07:21.756 6704.837 - 6755.249: 3.1140% ( 15) 00:07:21.756 6755.249 - 6805.662: 3.2130% ( 9) 00:07:21.756 6805.662 - 6856.074: 3.2901% ( 7) 00:07:21.756 6856.074 - 6906.486: 3.3671% ( 7) 00:07:21.756 6906.486 - 6956.898: 3.4551% ( 8) 00:07:21.757 6956.898 - 7007.311: 3.5211% ( 6) 00:07:21.757 7057.723 - 7108.135: 3.5541% ( 3) 00:07:21.757 7108.135 - 7158.548: 3.5871% ( 3) 00:07:21.757 7158.548 - 7208.960: 3.6202% ( 3) 00:07:21.757 7208.960 - 7259.372: 3.6532% ( 3) 00:07:21.757 7259.372 - 7309.785: 3.6972% ( 4) 00:07:21.757 7309.785 - 7360.197: 3.7302% ( 3) 00:07:21.757 7360.197 - 7410.609: 3.7522% ( 2) 00:07:21.757 7410.609 - 7461.022: 3.7962% ( 4) 00:07:21.757 7461.022 - 7511.434: 3.8292% ( 3) 00:07:21.757 7511.434 - 7561.846: 3.8622% ( 3) 00:07:21.757 7561.846 - 7612.258: 3.8952% ( 3) 00:07:21.757 7612.258 - 7662.671: 3.9283% ( 3) 00:07:21.757 7662.671 - 7713.083: 3.9613% ( 3) 00:07:21.757 7713.083 - 7763.495: 3.9943% ( 3) 00:07:21.757 7763.495 - 7813.908: 4.0273% ( 3) 00:07:21.757 7813.908 - 7864.320: 4.0603% ( 3) 00:07:21.757 7864.320 - 7914.732: 4.0823% ( 2) 00:07:21.757 7914.732 - 7965.145: 4.1373% ( 5) 00:07:21.757 7965.145 - 8015.557: 4.2254% ( 8) 00:07:21.757 8015.557 - 8065.969: 4.2914% ( 6) 00:07:21.757 8065.969 - 8116.382: 4.3574% ( 6) 00:07:21.757 8116.382 - 8166.794: 4.3904% ( 3) 00:07:21.757 8166.794 - 8217.206: 4.4124% ( 2) 00:07:21.757 8217.206 - 8267.618: 4.4454% ( 3) 00:07:21.757 8267.618 - 8318.031: 4.4894% ( 4) 00:07:21.757 8318.031 - 8368.443: 4.5224% ( 3) 00:07:21.757 8368.443 - 8418.855: 4.5885% ( 6) 00:07:21.757 8418.855 - 8469.268: 4.6765% ( 8) 00:07:21.757 8469.268 - 8519.680: 4.7315% ( 5) 00:07:21.757 8519.680 - 8570.092: 4.7975% ( 6) 00:07:21.757 8570.092 - 8620.505: 4.8856% ( 8) 00:07:21.757 8620.505 - 8670.917: 4.9846% ( 9) 00:07:21.757 8670.917 - 8721.329: 5.0836% ( 9) 00:07:21.757 8721.329 - 8771.742: 5.1827% ( 9) 00:07:21.757 8771.742 - 8822.154: 5.2817% ( 9) 00:07:21.757 8822.154 - 8872.566: 5.3807% ( 9) 00:07:21.757 8872.566 - 8922.978: 5.4688% ( 8) 00:07:21.757 8922.978 - 8973.391: 5.5238% ( 5) 00:07:21.757 8973.391 - 9023.803: 5.5898% ( 6) 00:07:21.757 9023.803 - 9074.215: 5.6668% ( 7) 00:07:21.757 9074.215 - 9124.628: 5.7328% ( 6) 00:07:21.757 9124.628 - 9175.040: 5.7879% ( 5) 00:07:21.757 9175.040 - 9225.452: 5.8539% ( 6) 00:07:21.757 9225.452 - 9275.865: 5.8979% ( 4) 00:07:21.757 9275.865 - 9326.277: 5.9639% ( 6) 00:07:21.757 9326.277 - 9376.689: 6.0079% ( 4) 00:07:21.757 9376.689 - 9427.102: 6.0629% ( 5) 00:07:21.757 9427.102 - 9477.514: 6.1400% ( 7) 00:07:21.757 9477.514 - 9527.926: 6.2390% ( 9) 00:07:21.757 9527.926 - 9578.338: 6.3490% ( 10) 00:07:21.757 9578.338 - 9628.751: 6.5581% ( 19) 00:07:21.757 9628.751 - 9679.163: 6.6791% ( 11) 00:07:21.757 9679.163 - 9729.575: 6.7452% ( 6) 00:07:21.757 9729.575 - 9779.988: 6.7892% ( 4) 00:07:21.757 9779.988 - 9830.400: 6.8552% ( 6) 00:07:21.757 9830.400 - 9880.812: 6.9212% ( 6) 00:07:21.757 9880.812 - 9931.225: 6.9872% ( 6) 00:07:21.757 9931.225 - 9981.637: 7.0643% ( 7) 00:07:21.757 9981.637 - 10032.049: 7.1413% ( 7) 00:07:21.757 10032.049 - 10082.462: 7.2293% ( 8) 00:07:21.757 10082.462 - 10132.874: 7.3173% ( 8) 00:07:21.757 10132.874 - 10183.286: 7.3944% ( 7) 00:07:21.757 10183.286 - 10233.698: 7.4604% ( 6) 00:07:21.757 10233.698 - 10284.111: 7.5154% ( 5) 00:07:21.757 10284.111 - 10334.523: 7.5814% ( 6) 00:07:21.757 
10334.523 - 10384.935: 7.6144% ( 3) 00:07:21.757 10384.935 - 10435.348: 7.6474% ( 3) 00:07:21.757 10435.348 - 10485.760: 7.6805% ( 3) 00:07:21.757 10485.760 - 10536.172: 7.7135% ( 3) 00:07:21.757 10536.172 - 10586.585: 7.7465% ( 3) 00:07:21.757 10586.585 - 10636.997: 7.7905% ( 4) 00:07:21.757 10636.997 - 10687.409: 7.8015% ( 1) 00:07:21.757 10687.409 - 10737.822: 7.8345% ( 3) 00:07:21.757 10737.822 - 10788.234: 7.8675% ( 3) 00:07:21.757 10788.234 - 10838.646: 7.9115% ( 4) 00:07:21.757 10838.646 - 10889.058: 7.9886% ( 7) 00:07:21.757 10889.058 - 10939.471: 8.0216% ( 3) 00:07:21.757 10939.471 - 10989.883: 8.0436% ( 2) 00:07:21.757 10989.883 - 11040.295: 8.0766% ( 3) 00:07:21.757 11040.295 - 11090.708: 8.1426% ( 6) 00:07:21.757 11090.708 - 11141.120: 8.1756% ( 3) 00:07:21.757 11141.120 - 11191.532: 8.2526% ( 7) 00:07:21.757 11191.532 - 11241.945: 8.3847% ( 12) 00:07:21.757 11241.945 - 11292.357: 8.5497% ( 15) 00:07:21.757 11292.357 - 11342.769: 8.6708% ( 11) 00:07:21.757 11342.769 - 11393.182: 8.8138% ( 13) 00:07:21.757 11393.182 - 11443.594: 9.0449% ( 21) 00:07:21.757 11443.594 - 11494.006: 9.2760% ( 21) 00:07:21.757 11494.006 - 11544.418: 9.5511% ( 25) 00:07:21.757 11544.418 - 11594.831: 9.8812% ( 30) 00:07:21.757 11594.831 - 11645.243: 10.2113% ( 30) 00:07:21.757 11645.243 - 11695.655: 10.5304% ( 29) 00:07:21.757 11695.655 - 11746.068: 10.9045% ( 34) 00:07:21.757 11746.068 - 11796.480: 11.2896% ( 35) 00:07:21.757 11796.480 - 11846.892: 11.6527% ( 33) 00:07:21.757 11846.892 - 11897.305: 12.0929% ( 40) 00:07:21.757 11897.305 - 11947.717: 12.5330% ( 40) 00:07:21.757 11947.717 - 11998.129: 12.9181% ( 35) 00:07:21.757 11998.129 - 12048.542: 13.2923% ( 34) 00:07:21.757 12048.542 - 12098.954: 13.7214% ( 39) 00:07:21.757 12098.954 - 12149.366: 14.1395% ( 38) 00:07:21.757 12149.366 - 12199.778: 14.6787% ( 49) 00:07:21.757 12199.778 - 12250.191: 15.2179% ( 49) 00:07:21.757 12250.191 - 12300.603: 15.8781% ( 60) 00:07:21.757 12300.603 - 12351.015: 16.5603% ( 62) 00:07:21.757 12351.015 - 12401.428: 17.2755% ( 65) 00:07:21.757 12401.428 - 12451.840: 18.1228% ( 77) 00:07:21.757 12451.840 - 12502.252: 18.9261% ( 73) 00:07:21.757 12502.252 - 12552.665: 19.7293% ( 73) 00:07:21.757 12552.665 - 12603.077: 20.5766% ( 77) 00:07:21.757 12603.077 - 12653.489: 21.3358% ( 69) 00:07:21.757 12653.489 - 12703.902: 22.1391% ( 73) 00:07:21.757 12703.902 - 12754.314: 22.9754% ( 76) 00:07:21.757 12754.314 - 12804.726: 23.7126% ( 67) 00:07:21.757 12804.726 - 12855.138: 24.4938% ( 71) 00:07:21.757 12855.138 - 12905.551: 25.3191% ( 75) 00:07:21.757 12905.551 - 13006.375: 27.0577% ( 158) 00:07:21.757 13006.375 - 13107.200: 28.8842% ( 166) 00:07:21.757 13107.200 - 13208.025: 30.6338% ( 159) 00:07:21.757 13208.025 - 13308.849: 32.1413% ( 137) 00:07:21.757 13308.849 - 13409.674: 33.7588% ( 147) 00:07:21.757 13409.674 - 13510.498: 35.3213% ( 142) 00:07:21.757 13510.498 - 13611.323: 36.9498% ( 148) 00:07:21.757 13611.323 - 13712.148: 38.7324% ( 162) 00:07:21.757 13712.148 - 13812.972: 40.4710% ( 158) 00:07:21.757 13812.972 - 13913.797: 42.4296% ( 178) 00:07:21.757 13913.797 - 14014.622: 44.6963% ( 206) 00:07:21.757 14014.622 - 14115.446: 47.0180% ( 211) 00:07:21.757 14115.446 - 14216.271: 49.1527% ( 194) 00:07:21.757 14216.271 - 14317.095: 51.4965% ( 213) 00:07:21.757 14317.095 - 14417.920: 53.8842% ( 217) 00:07:21.757 14417.920 - 14518.745: 56.2280% ( 213) 00:07:21.757 14518.745 - 14619.569: 58.4067% ( 198) 00:07:21.757 14619.569 - 14720.394: 60.5964% ( 199) 00:07:21.757 14720.394 - 14821.218: 62.7971% ( 200) 00:07:21.757 
14821.218 - 14922.043: 64.8988% ( 191) 00:07:21.757 14922.043 - 15022.868: 66.8464% ( 177) 00:07:21.757 15022.868 - 15123.692: 68.7280% ( 171) 00:07:21.757 15123.692 - 15224.517: 70.5656% ( 167) 00:07:21.757 15224.517 - 15325.342: 72.2161% ( 150) 00:07:21.757 15325.342 - 15426.166: 73.7346% ( 138) 00:07:21.757 15426.166 - 15526.991: 75.2311% ( 136) 00:07:21.757 15526.991 - 15627.815: 76.6835% ( 132) 00:07:21.757 15627.815 - 15728.640: 78.2350% ( 141) 00:07:21.757 15728.640 - 15829.465: 79.6105% ( 125) 00:07:21.757 15829.465 - 15930.289: 80.7879% ( 107) 00:07:21.757 15930.289 - 16031.114: 81.7562% ( 88) 00:07:21.757 16031.114 - 16131.938: 82.6585% ( 82) 00:07:21.757 16131.938 - 16232.763: 83.4947% ( 76) 00:07:21.757 16232.763 - 16333.588: 84.2540% ( 69) 00:07:21.757 16333.588 - 16434.412: 85.0022% ( 68) 00:07:21.757 16434.412 - 16535.237: 86.1466% ( 104) 00:07:21.757 16535.237 - 16636.062: 87.2359% ( 99) 00:07:21.757 16636.062 - 16736.886: 88.1602% ( 84) 00:07:21.757 16736.886 - 16837.711: 88.9525% ( 72) 00:07:21.757 16837.711 - 16938.535: 89.9868% ( 94) 00:07:21.757 16938.535 - 17039.360: 91.0321% ( 95) 00:07:21.757 17039.360 - 17140.185: 91.9234% ( 81) 00:07:21.757 17140.185 - 17241.009: 92.7927% ( 79) 00:07:21.757 17241.009 - 17341.834: 93.4859% ( 63) 00:07:21.757 17341.834 - 17442.658: 94.1571% ( 61) 00:07:21.757 17442.658 - 17543.483: 94.7623% ( 55) 00:07:21.757 17543.483 - 17644.308: 95.2465% ( 44) 00:07:21.757 17644.308 - 17745.132: 95.6536% ( 37) 00:07:21.757 17745.132 - 17845.957: 96.0277% ( 34) 00:07:21.757 17845.957 - 17946.782: 96.3468% ( 29) 00:07:21.757 17946.782 - 18047.606: 96.5339% ( 17) 00:07:21.757 18047.606 - 18148.431: 96.6879% ( 14) 00:07:21.757 18148.431 - 18249.255: 96.7980% ( 10) 00:07:21.757 18249.255 - 18350.080: 96.9410% ( 13) 00:07:21.757 18350.080 - 18450.905: 97.1171% ( 16) 00:07:21.757 18450.905 - 18551.729: 97.2381% ( 11) 00:07:21.757 18551.729 - 18652.554: 97.3812% ( 13) 00:07:21.757 18652.554 - 18753.378: 97.5242% ( 13) 00:07:21.757 18753.378 - 18854.203: 97.6562% ( 12) 00:07:21.757 18854.203 - 18955.028: 97.7773% ( 11) 00:07:21.757 18955.028 - 19055.852: 97.8873% ( 10) 00:07:21.757 19055.852 - 19156.677: 97.9974% ( 10) 00:07:21.757 19156.677 - 19257.502: 98.1074% ( 10) 00:07:21.757 19257.502 - 19358.326: 98.1624% ( 5) 00:07:21.757 19358.326 - 19459.151: 98.1954% ( 3) 00:07:21.757 19459.151 - 19559.975: 98.2394% ( 4) 00:07:21.757 19559.975 - 19660.800: 98.2724% ( 3) 00:07:21.757 19660.800 - 19761.625: 98.3165% ( 4) 00:07:21.757 19761.625 - 19862.449: 98.4485% ( 12) 00:07:21.757 19862.449 - 19963.274: 98.5365% ( 8) 00:07:21.757 19963.274 - 20064.098: 98.6356% ( 9) 00:07:21.757 20064.098 - 20164.923: 98.7346% ( 9) 00:07:21.757 20164.923 - 20265.748: 98.8336% ( 9) 00:07:21.757 20265.748 - 20366.572: 98.9437% ( 10) 00:07:21.757 20366.572 - 20467.397: 99.0317% ( 8) 00:07:21.757 20467.397 - 20568.222: 99.0977% ( 6) 00:07:21.757 20568.222 - 20669.046: 99.1527% ( 5) 00:07:21.757 20669.046 - 20769.871: 99.1967% ( 4) 00:07:21.757 20769.871 - 20870.695: 99.2408% ( 4) 00:07:21.757 20870.695 - 20971.520: 99.2958% ( 5) 00:07:21.757 24399.557 - 24500.382: 99.3068% ( 1) 00:07:21.757 24500.382 - 24601.206: 99.3508% ( 4) 00:07:21.758 24601.206 - 24702.031: 99.3838% ( 3) 00:07:21.758 24702.031 - 24802.855: 99.4278% ( 4) 00:07:21.758 24802.855 - 24903.680: 99.4608% ( 3) 00:07:21.758 24903.680 - 25004.505: 99.4938% ( 3) 00:07:21.758 25004.505 - 25105.329: 99.5268% ( 3) 00:07:21.758 25105.329 - 25206.154: 99.5709% ( 4) 00:07:21.758 25206.154 - 25306.978: 99.6149% ( 4) 
00:07:21.758 25306.978 - 25407.803: 99.6479% ( 3) 00:07:21.758 25407.803 - 25508.628: 99.6919% ( 4) 00:07:21.758 25508.628 - 25609.452: 99.7359% ( 4) 00:07:21.758 25609.452 - 25710.277: 99.7689% ( 3) 00:07:21.758 25710.277 - 25811.102: 99.8129% ( 4) 00:07:21.758 25811.102 - 26012.751: 99.9010% ( 8) 00:07:21.758 26012.751 - 26214.400: 99.9780% ( 7) 00:07:21.758 26214.400 - 26416.049: 100.0000% ( 2) 00:07:21.758 00:07:21.758 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:21.758 ============================================================================== 00:07:21.758 Range in us Cumulative IO count 00:07:21.758 5847.828 - 5873.034: 0.0330% ( 3) 00:07:21.758 5873.034 - 5898.240: 0.0440% ( 1) 00:07:21.758 5898.240 - 5923.446: 0.0770% ( 3) 00:07:21.758 5923.446 - 5948.652: 0.1430% ( 6) 00:07:21.758 5948.652 - 5973.858: 0.2091% ( 6) 00:07:21.758 5973.858 - 5999.065: 0.3521% ( 13) 00:07:21.758 5999.065 - 6024.271: 0.4511% ( 9) 00:07:21.758 6024.271 - 6049.477: 0.5172% ( 6) 00:07:21.758 6049.477 - 6074.683: 0.5942% ( 7) 00:07:21.758 6074.683 - 6099.889: 0.6272% ( 3) 00:07:21.758 6099.889 - 6125.095: 0.6492% ( 2) 00:07:21.758 6125.095 - 6150.302: 0.7592% ( 10) 00:07:21.758 6150.302 - 6175.508: 0.8913% ( 12) 00:07:21.758 6175.508 - 6200.714: 1.0343% ( 13) 00:07:21.758 6200.714 - 6225.920: 1.1114% ( 7) 00:07:21.758 6225.920 - 6251.126: 1.1774% ( 6) 00:07:21.758 6251.126 - 6276.332: 1.2654% ( 8) 00:07:21.758 6276.332 - 6301.538: 1.3204% ( 5) 00:07:21.758 6301.538 - 6326.745: 1.4085% ( 8) 00:07:21.758 6326.745 - 6351.951: 1.4965% ( 8) 00:07:21.758 6351.951 - 6377.157: 1.5735% ( 7) 00:07:21.758 6377.157 - 6402.363: 1.6395% ( 6) 00:07:21.758 6402.363 - 6427.569: 1.7276% ( 8) 00:07:21.758 6427.569 - 6452.775: 1.8156% ( 8) 00:07:21.758 6452.775 - 6503.188: 1.9696% ( 14) 00:07:21.758 6503.188 - 6553.600: 2.1457% ( 16) 00:07:21.758 6553.600 - 6604.012: 2.2997% ( 14) 00:07:21.758 6604.012 - 6654.425: 2.4538% ( 14) 00:07:21.758 6654.425 - 6704.837: 2.5858% ( 12) 00:07:21.758 6704.837 - 6755.249: 2.6408% ( 5) 00:07:21.758 6755.249 - 6805.662: 2.6739% ( 3) 00:07:21.758 6805.662 - 6856.074: 2.7069% ( 3) 00:07:21.758 6856.074 - 6906.486: 2.7509% ( 4) 00:07:21.758 6906.486 - 6956.898: 2.7839% ( 3) 00:07:21.758 6956.898 - 7007.311: 2.8059% ( 2) 00:07:21.758 7007.311 - 7057.723: 2.8169% ( 1) 00:07:21.758 7108.135 - 7158.548: 2.8829% ( 6) 00:07:21.758 7158.548 - 7208.960: 2.9599% ( 7) 00:07:21.758 7208.960 - 7259.372: 3.1140% ( 14) 00:07:21.758 7259.372 - 7309.785: 3.1910% ( 7) 00:07:21.758 7309.785 - 7360.197: 3.2460% ( 5) 00:07:21.758 7360.197 - 7410.609: 3.3011% ( 5) 00:07:21.758 7410.609 - 7461.022: 3.3781% ( 7) 00:07:21.758 7461.022 - 7511.434: 3.4661% ( 8) 00:07:21.758 7511.434 - 7561.846: 3.5431% ( 7) 00:07:21.758 7561.846 - 7612.258: 3.6202% ( 7) 00:07:21.758 7612.258 - 7662.671: 3.6752% ( 5) 00:07:21.758 7662.671 - 7713.083: 3.7412% ( 6) 00:07:21.758 7713.083 - 7763.495: 3.8182% ( 7) 00:07:21.758 7763.495 - 7813.908: 3.9283% ( 10) 00:07:21.758 7813.908 - 7864.320: 4.0493% ( 11) 00:07:21.758 7864.320 - 7914.732: 4.1593% ( 10) 00:07:21.758 7914.732 - 7965.145: 4.2474% ( 8) 00:07:21.758 7965.145 - 8015.557: 4.3354% ( 8) 00:07:21.758 8015.557 - 8065.969: 4.4344% ( 9) 00:07:21.758 8065.969 - 8116.382: 4.5445% ( 10) 00:07:21.758 8116.382 - 8166.794: 4.6105% ( 6) 00:07:21.758 8166.794 - 8217.206: 4.6875% ( 7) 00:07:21.758 8217.206 - 8267.618: 4.7975% ( 10) 00:07:21.758 8267.618 - 8318.031: 4.8526% ( 5) 00:07:21.758 8318.031 - 8368.443: 4.9076% ( 5) 00:07:21.758 8368.443 - 8418.855: 4.9736% ( 6) 
00:07:21.758 8418.855 - 8469.268: 5.0286% ( 5) 00:07:21.758 8469.268 - 8519.680: 5.0726% ( 4) 00:07:21.758 8519.680 - 8570.092: 5.1276% ( 5) 00:07:21.758 8570.092 - 8620.505: 5.2157% ( 8) 00:07:21.758 8620.505 - 8670.917: 5.3477% ( 12) 00:07:21.758 8670.917 - 8721.329: 5.4357% ( 8) 00:07:21.758 8721.329 - 8771.742: 5.5238% ( 8) 00:07:21.758 8771.742 - 8822.154: 5.6558% ( 12) 00:07:21.758 8822.154 - 8872.566: 5.7658% ( 10) 00:07:21.758 8872.566 - 8922.978: 5.8869% ( 11) 00:07:21.758 8922.978 - 8973.391: 5.9969% ( 10) 00:07:21.758 8973.391 - 9023.803: 6.1290% ( 12) 00:07:21.758 9023.803 - 9074.215: 6.2390% ( 10) 00:07:21.758 9074.215 - 9124.628: 6.3050% ( 6) 00:07:21.758 9124.628 - 9175.040: 6.3710% ( 6) 00:07:21.758 9175.040 - 9225.452: 6.4151% ( 4) 00:07:21.758 9225.452 - 9275.865: 6.4811% ( 6) 00:07:21.758 9275.865 - 9326.277: 6.5471% ( 6) 00:07:21.758 9326.277 - 9376.689: 6.6131% ( 6) 00:07:21.758 9376.689 - 9427.102: 6.6681% ( 5) 00:07:21.758 9427.102 - 9477.514: 6.7342% ( 6) 00:07:21.758 9477.514 - 9527.926: 6.8002% ( 6) 00:07:21.758 9527.926 - 9578.338: 6.8662% ( 6) 00:07:21.758 9578.338 - 9628.751: 6.8882% ( 2) 00:07:21.758 9628.751 - 9679.163: 6.9212% ( 3) 00:07:21.758 9679.163 - 9729.575: 6.9762% ( 5) 00:07:21.758 9729.575 - 9779.988: 7.0533% ( 7) 00:07:21.758 9779.988 - 9830.400: 7.1083% ( 5) 00:07:21.758 9830.400 - 9880.812: 7.1633% ( 5) 00:07:21.758 9880.812 - 9931.225: 7.1963% ( 3) 00:07:21.758 9931.225 - 9981.637: 7.2403% ( 4) 00:07:21.758 9981.637 - 10032.049: 7.2953% ( 5) 00:07:21.758 10032.049 - 10082.462: 7.3173% ( 2) 00:07:21.758 10082.462 - 10132.874: 7.3504% ( 3) 00:07:21.758 10132.874 - 10183.286: 7.3724% ( 2) 00:07:21.758 10183.286 - 10233.698: 7.3944% ( 2) 00:07:21.758 10233.698 - 10284.111: 7.4274% ( 3) 00:07:21.758 10284.111 - 10334.523: 7.4604% ( 3) 00:07:21.758 10334.523 - 10384.935: 7.5044% ( 4) 00:07:21.758 10384.935 - 10435.348: 7.6034% ( 9) 00:07:21.758 10435.348 - 10485.760: 7.7135% ( 10) 00:07:21.758 10485.760 - 10536.172: 7.8345% ( 11) 00:07:21.758 10536.172 - 10586.585: 7.9665% ( 12) 00:07:21.758 10586.585 - 10636.997: 8.0436% ( 7) 00:07:21.758 10636.997 - 10687.409: 8.1316% ( 8) 00:07:21.758 10687.409 - 10737.822: 8.2306% ( 9) 00:07:21.758 10737.822 - 10788.234: 8.3187% ( 8) 00:07:21.758 10788.234 - 10838.646: 8.4177% ( 9) 00:07:21.758 10838.646 - 10889.058: 8.5167% ( 9) 00:07:21.758 10889.058 - 10939.471: 8.7148% ( 18) 00:07:21.758 10939.471 - 10989.883: 8.8798% ( 15) 00:07:21.758 10989.883 - 11040.295: 9.0119% ( 12) 00:07:21.758 11040.295 - 11090.708: 9.1769% ( 15) 00:07:21.758 11090.708 - 11141.120: 9.3310% ( 14) 00:07:21.758 11141.120 - 11191.532: 9.4850% ( 14) 00:07:21.758 11191.532 - 11241.945: 9.6281% ( 13) 00:07:21.758 11241.945 - 11292.357: 9.7381% ( 10) 00:07:21.758 11292.357 - 11342.769: 9.8702% ( 12) 00:07:21.758 11342.769 - 11393.182: 10.0792% ( 19) 00:07:21.758 11393.182 - 11443.594: 10.3103% ( 21) 00:07:21.758 11443.594 - 11494.006: 10.4643% ( 14) 00:07:21.758 11494.006 - 11544.418: 10.6734% ( 19) 00:07:21.758 11544.418 - 11594.831: 10.8385% ( 15) 00:07:21.758 11594.831 - 11645.243: 11.0805% ( 22) 00:07:21.758 11645.243 - 11695.655: 11.3556% ( 25) 00:07:21.758 11695.655 - 11746.068: 11.6307% ( 25) 00:07:21.758 11746.068 - 11796.480: 11.8948% ( 24) 00:07:21.758 11796.480 - 11846.892: 12.2249% ( 30) 00:07:21.758 11846.892 - 11897.305: 12.5110% ( 26) 00:07:21.758 11897.305 - 11947.717: 12.8521% ( 31) 00:07:21.758 11947.717 - 11998.129: 13.1932% ( 31) 00:07:21.758 11998.129 - 12048.542: 13.5893% ( 36) 00:07:21.758 12048.542 - 12098.954: 
14.0735% ( 44) 00:07:21.758 12098.954 - 12149.366: 14.5577% ( 44) 00:07:21.758 12149.366 - 12199.778: 15.0748% ( 47) 00:07:21.758 12199.778 - 12250.191: 15.6470% ( 52) 00:07:21.758 12250.191 - 12300.603: 16.3292% ( 62) 00:07:21.758 12300.603 - 12351.015: 16.9124% ( 53) 00:07:21.758 12351.015 - 12401.428: 17.4406% ( 48) 00:07:21.758 12401.428 - 12451.840: 17.9577% ( 47) 00:07:21.758 12451.840 - 12502.252: 18.4859% ( 48) 00:07:21.758 12502.252 - 12552.665: 18.9701% ( 44) 00:07:21.758 12552.665 - 12603.077: 19.4542% ( 44) 00:07:21.759 12603.077 - 12653.489: 19.9824% ( 48) 00:07:21.759 12653.489 - 12703.902: 20.5986% ( 56) 00:07:21.759 12703.902 - 12754.314: 21.2478% ( 59) 00:07:21.759 12754.314 - 12804.726: 21.8310% ( 53) 00:07:21.759 12804.726 - 12855.138: 22.6122% ( 71) 00:07:21.759 12855.138 - 12905.551: 23.3935% ( 71) 00:07:21.759 12905.551 - 13006.375: 24.9670% ( 143) 00:07:21.759 13006.375 - 13107.200: 26.4965% ( 139) 00:07:21.759 13107.200 - 13208.025: 28.1140% ( 147) 00:07:21.759 13208.025 - 13308.849: 29.7975% ( 153) 00:07:21.759 13308.849 - 13409.674: 31.6901% ( 172) 00:07:21.759 13409.674 - 13510.498: 34.0119% ( 211) 00:07:21.759 13510.498 - 13611.323: 36.4437% ( 221) 00:07:21.759 13611.323 - 13712.148: 38.8864% ( 222) 00:07:21.759 13712.148 - 13812.972: 41.2742% ( 217) 00:07:21.759 13812.972 - 13913.797: 43.7060% ( 221) 00:07:21.759 13913.797 - 14014.622: 45.9287% ( 202) 00:07:21.759 14014.622 - 14115.446: 48.0964% ( 197) 00:07:21.759 14115.446 - 14216.271: 50.2751% ( 198) 00:07:21.759 14216.271 - 14317.095: 52.1897% ( 174) 00:07:21.759 14317.095 - 14417.920: 54.1043% ( 174) 00:07:21.759 14417.920 - 14518.745: 56.0519% ( 177) 00:07:21.759 14518.745 - 14619.569: 57.9886% ( 176) 00:07:21.759 14619.569 - 14720.394: 59.9032% ( 174) 00:07:21.759 14720.394 - 14821.218: 61.8728% ( 179) 00:07:21.759 14821.218 - 14922.043: 63.7654% ( 172) 00:07:21.759 14922.043 - 15022.868: 65.7350% ( 179) 00:07:21.759 15022.868 - 15123.692: 67.7047% ( 179) 00:07:21.759 15123.692 - 15224.517: 69.4432% ( 158) 00:07:21.759 15224.517 - 15325.342: 71.3358% ( 172) 00:07:21.759 15325.342 - 15426.166: 73.0964% ( 160) 00:07:21.759 15426.166 - 15526.991: 74.8019% ( 155) 00:07:21.759 15526.991 - 15627.815: 76.4855% ( 153) 00:07:21.759 15627.815 - 15728.640: 77.8609% ( 125) 00:07:21.759 15728.640 - 15829.465: 79.3024% ( 131) 00:07:21.759 15829.465 - 15930.289: 80.6228% ( 120) 00:07:21.759 15930.289 - 16031.114: 81.8992% ( 116) 00:07:21.759 16031.114 - 16131.938: 83.0876% ( 108) 00:07:21.759 16131.938 - 16232.763: 84.3420% ( 114) 00:07:21.759 16232.763 - 16333.588: 85.4423% ( 100) 00:07:21.759 16333.588 - 16434.412: 86.3116% ( 79) 00:07:21.759 16434.412 - 16535.237: 87.2249% ( 83) 00:07:21.759 16535.237 - 16636.062: 88.0502% ( 75) 00:07:21.759 16636.062 - 16736.886: 88.8314% ( 71) 00:07:21.759 16736.886 - 16837.711: 89.7997% ( 88) 00:07:21.759 16837.711 - 16938.535: 90.7240% ( 84) 00:07:21.759 16938.535 - 17039.360: 91.4393% ( 65) 00:07:21.759 17039.360 - 17140.185: 92.1105% ( 61) 00:07:21.759 17140.185 - 17241.009: 92.7817% ( 61) 00:07:21.759 17241.009 - 17341.834: 93.4199% ( 58) 00:07:21.759 17341.834 - 17442.658: 94.0801% ( 60) 00:07:21.759 17442.658 - 17543.483: 94.6413% ( 51) 00:07:21.759 17543.483 - 17644.308: 95.2245% ( 53) 00:07:21.759 17644.308 - 17745.132: 95.5986% ( 34) 00:07:21.759 17745.132 - 17845.957: 95.9507% ( 32) 00:07:21.759 17845.957 - 17946.782: 96.3248% ( 34) 00:07:21.759 17946.782 - 18047.606: 96.5339% ( 19) 00:07:21.759 18047.606 - 18148.431: 96.6329% ( 9) 00:07:21.759 18148.431 - 18249.255: 
96.7760% ( 13) 00:07:21.759 18249.255 - 18350.080: 97.0511% ( 25) 00:07:21.759 18350.080 - 18450.905: 97.2381% ( 17) 00:07:21.759 18450.905 - 18551.729: 97.4472% ( 19) 00:07:21.759 18551.729 - 18652.554: 97.6452% ( 18) 00:07:21.759 18652.554 - 18753.378: 97.8103% ( 15) 00:07:21.759 18753.378 - 18854.203: 97.9754% ( 15) 00:07:21.759 18854.203 - 18955.028: 98.1734% ( 18) 00:07:21.759 18955.028 - 19055.852: 98.3825% ( 19) 00:07:21.759 19055.852 - 19156.677: 98.5145% ( 12) 00:07:21.759 19156.677 - 19257.502: 98.7126% ( 18) 00:07:21.759 19257.502 - 19358.326: 98.8776% ( 15) 00:07:21.759 19358.326 - 19459.151: 99.0207% ( 13) 00:07:21.759 19459.151 - 19559.975: 99.0977% ( 7) 00:07:21.759 19559.975 - 19660.800: 99.1197% ( 2) 00:07:21.759 19660.800 - 19761.625: 99.1527% ( 3) 00:07:21.759 19761.625 - 19862.449: 99.1967% ( 4) 00:07:21.759 19862.449 - 19963.274: 99.2298% ( 3) 00:07:21.759 19963.274 - 20064.098: 99.2518% ( 2) 00:07:21.759 20064.098 - 20164.923: 99.2848% ( 3) 00:07:21.759 20164.923 - 20265.748: 99.2958% ( 1) 00:07:21.759 23088.837 - 23189.662: 99.3288% ( 3) 00:07:21.759 23189.662 - 23290.486: 99.3728% ( 4) 00:07:21.759 23290.486 - 23391.311: 99.4168% ( 4) 00:07:21.759 23391.311 - 23492.135: 99.4608% ( 4) 00:07:21.759 23492.135 - 23592.960: 99.4938% ( 3) 00:07:21.759 23592.960 - 23693.785: 99.5379% ( 4) 00:07:21.759 23693.785 - 23794.609: 99.5819% ( 4) 00:07:21.759 23794.609 - 23895.434: 99.6149% ( 3) 00:07:21.759 23895.434 - 23996.258: 99.6589% ( 4) 00:07:21.759 23996.258 - 24097.083: 99.6919% ( 3) 00:07:21.759 24097.083 - 24197.908: 99.7359% ( 4) 00:07:21.759 24197.908 - 24298.732: 99.7799% ( 4) 00:07:21.759 24298.732 - 24399.557: 99.8129% ( 3) 00:07:21.759 24399.557 - 24500.382: 99.8570% ( 4) 00:07:21.759 24500.382 - 24601.206: 99.9010% ( 4) 00:07:21.759 24601.206 - 24702.031: 99.9340% ( 3) 00:07:21.759 24702.031 - 24802.855: 99.9780% ( 4) 00:07:21.759 24802.855 - 24903.680: 100.0000% ( 2) 00:07:21.759 00:07:21.759 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:21.759 ============================================================================== 00:07:21.759 Range in us Cumulative IO count 00:07:21.759 5873.034 - 5898.240: 0.0220% ( 2) 00:07:21.759 5898.240 - 5923.446: 0.2751% ( 23) 00:07:21.759 5923.446 - 5948.652: 0.3631% ( 8) 00:07:21.759 5948.652 - 5973.858: 0.3851% ( 2) 00:07:21.759 5973.858 - 5999.065: 0.4181% ( 3) 00:07:21.759 5999.065 - 6024.271: 0.4842% ( 6) 00:07:21.759 6024.271 - 6049.477: 0.5502% ( 6) 00:07:21.759 6049.477 - 6074.683: 0.5942% ( 4) 00:07:21.759 6074.683 - 6099.889: 0.6602% ( 6) 00:07:21.759 6099.889 - 6125.095: 0.7262% ( 6) 00:07:21.759 6125.095 - 6150.302: 0.8033% ( 7) 00:07:21.759 6150.302 - 6175.508: 0.9023% ( 9) 00:07:21.759 6175.508 - 6200.714: 1.0013% ( 9) 00:07:21.759 6200.714 - 6225.920: 1.0783% ( 7) 00:07:21.759 6225.920 - 6251.126: 1.1444% ( 6) 00:07:21.759 6251.126 - 6276.332: 1.2324% ( 8) 00:07:21.759 6276.332 - 6301.538: 1.3204% ( 8) 00:07:21.759 6301.538 - 6326.745: 1.4085% ( 8) 00:07:21.759 6326.745 - 6351.951: 1.4855% ( 7) 00:07:21.759 6351.951 - 6377.157: 1.5735% ( 8) 00:07:21.759 6377.157 - 6402.363: 1.6835% ( 10) 00:07:21.759 6402.363 - 6427.569: 1.7606% ( 7) 00:07:21.759 6427.569 - 6452.775: 1.8376% ( 7) 00:07:21.759 6452.775 - 6503.188: 2.0026% ( 15) 00:07:21.759 6503.188 - 6553.600: 2.1897% ( 17) 00:07:21.759 6553.600 - 6604.012: 2.3438% ( 14) 00:07:21.759 6604.012 - 6654.425: 2.5198% ( 16) 00:07:21.759 6654.425 - 6704.837: 2.6188% ( 9) 00:07:21.759 6704.837 - 6755.249: 2.6629% ( 4) 00:07:21.759 6755.249 - 
6805.662: 2.6959% ( 3) 00:07:21.759 6805.662 - 6856.074: 2.7399% ( 4) 00:07:21.759 6856.074 - 6906.486: 2.7839% ( 4) 00:07:21.759 6906.486 - 6956.898: 2.8389% ( 5) 00:07:21.759 6956.898 - 7007.311: 2.9379% ( 9) 00:07:21.759 7007.311 - 7057.723: 2.9710% ( 3) 00:07:21.759 7057.723 - 7108.135: 2.9930% ( 2) 00:07:21.759 7108.135 - 7158.548: 3.0150% ( 2) 00:07:21.759 7158.548 - 7208.960: 3.0370% ( 2) 00:07:21.759 7208.960 - 7259.372: 3.0590% ( 2) 00:07:21.759 7259.372 - 7309.785: 3.1030% ( 4) 00:07:21.759 7309.785 - 7360.197: 3.1470% ( 4) 00:07:21.759 7360.197 - 7410.609: 3.1690% ( 2) 00:07:21.759 7410.609 - 7461.022: 3.2130% ( 4) 00:07:21.759 7461.022 - 7511.434: 3.2460% ( 3) 00:07:21.759 7511.434 - 7561.846: 3.2680% ( 2) 00:07:21.759 7561.846 - 7612.258: 3.3011% ( 3) 00:07:21.759 7612.258 - 7662.671: 3.3781% ( 7) 00:07:21.759 7662.671 - 7713.083: 3.4551% ( 7) 00:07:21.759 7713.083 - 7763.495: 3.5431% ( 8) 00:07:21.759 7763.495 - 7813.908: 3.6532% ( 10) 00:07:21.759 7813.908 - 7864.320: 3.8182% ( 15) 00:07:21.759 7864.320 - 7914.732: 3.9613% ( 13) 00:07:21.759 7914.732 - 7965.145: 4.0933% ( 12) 00:07:21.759 7965.145 - 8015.557: 4.1923% ( 9) 00:07:21.759 8015.557 - 8065.969: 4.3024% ( 10) 00:07:21.759 8065.969 - 8116.382: 4.4234% ( 11) 00:07:21.759 8116.382 - 8166.794: 4.5445% ( 11) 00:07:21.759 8166.794 - 8217.206: 4.6545% ( 10) 00:07:21.759 8217.206 - 8267.618: 4.7645% ( 10) 00:07:21.759 8267.618 - 8318.031: 4.8636% ( 9) 00:07:21.759 8318.031 - 8368.443: 4.9846% ( 11) 00:07:21.759 8368.443 - 8418.855: 5.0946% ( 10) 00:07:21.759 8418.855 - 8469.268: 5.1827% ( 8) 00:07:21.759 8469.268 - 8519.680: 5.2707% ( 8) 00:07:21.759 8519.680 - 8570.092: 5.3587% ( 8) 00:07:21.759 8570.092 - 8620.505: 5.4577% ( 9) 00:07:21.759 8620.505 - 8670.917: 5.5788% ( 11) 00:07:21.759 8670.917 - 8721.329: 5.6778% ( 9) 00:07:21.759 8721.329 - 8771.742: 5.7438% ( 6) 00:07:21.759 8771.742 - 8822.154: 5.8099% ( 6) 00:07:21.759 8822.154 - 8872.566: 5.8869% ( 7) 00:07:21.759 8872.566 - 8922.978: 5.9309% ( 4) 00:07:21.759 8922.978 - 8973.391: 5.9639% ( 3) 00:07:21.759 8973.391 - 9023.803: 5.9969% ( 3) 00:07:21.759 9023.803 - 9074.215: 6.0189% ( 2) 00:07:21.759 9074.215 - 9124.628: 6.0409% ( 2) 00:07:21.759 9124.628 - 9175.040: 6.0739% ( 3) 00:07:21.759 9175.040 - 9225.452: 6.1180% ( 4) 00:07:21.759 9225.452 - 9275.865: 6.1840% ( 6) 00:07:21.759 9275.865 - 9326.277: 6.2610% ( 7) 00:07:21.759 9326.277 - 9376.689: 6.3380% ( 7) 00:07:21.759 9376.689 - 9427.102: 6.3930% ( 5) 00:07:21.759 9427.102 - 9477.514: 6.4591% ( 6) 00:07:21.759 9477.514 - 9527.926: 6.5251% ( 6) 00:07:21.759 9527.926 - 9578.338: 6.6241% ( 9) 00:07:21.759 9578.338 - 9628.751: 6.7011% ( 7) 00:07:21.759 9628.751 - 9679.163: 6.8002% ( 9) 00:07:21.759 9679.163 - 9729.575: 6.8662% ( 6) 00:07:21.759 9729.575 - 9779.988: 6.9212% ( 5) 00:07:21.759 9779.988 - 9830.400: 6.9652% ( 4) 00:07:21.759 9830.400 - 9880.812: 7.0973% ( 12) 00:07:21.759 9880.812 - 9931.225: 7.1853% ( 8) 00:07:21.759 9931.225 - 9981.637: 7.2733% ( 8) 00:07:21.760 9981.637 - 10032.049: 7.3614% ( 8) 00:07:21.760 10032.049 - 10082.462: 7.4604% ( 9) 00:07:21.760 10082.462 - 10132.874: 7.5594% ( 9) 00:07:21.760 10132.874 - 10183.286: 7.6585% ( 9) 00:07:21.760 10183.286 - 10233.698: 7.7465% ( 8) 00:07:21.760 10233.698 - 10284.111: 7.8015% ( 5) 00:07:21.760 10284.111 - 10334.523: 7.8675% ( 6) 00:07:21.760 10334.523 - 10384.935: 7.9225% ( 5) 00:07:21.760 10384.935 - 10435.348: 7.9886% ( 6) 00:07:21.760 10435.348 - 10485.760: 8.0436% ( 5) 00:07:21.760 10485.760 - 10536.172: 8.1096% ( 6) 00:07:21.760 
10536.172 - 10586.585: 8.1756% ( 6) 00:07:21.760 10586.585 - 10636.997: 8.2416% ( 6) 00:07:21.760 10636.997 - 10687.409: 8.2967% ( 5) 00:07:21.760 10687.409 - 10737.822: 8.3517% ( 5) 00:07:21.760 10737.822 - 10788.234: 8.3957% ( 4) 00:07:21.760 10788.234 - 10838.646: 8.4287% ( 3) 00:07:21.760 10838.646 - 10889.058: 8.4507% ( 2) 00:07:21.760 10889.058 - 10939.471: 8.5607% ( 10) 00:07:21.760 10939.471 - 10989.883: 8.6378% ( 7) 00:07:21.760 10989.883 - 11040.295: 8.6818% ( 4) 00:07:21.760 11040.295 - 11090.708: 8.7368% ( 5) 00:07:21.760 11090.708 - 11141.120: 8.9018% ( 15) 00:07:21.760 11141.120 - 11191.532: 8.9899% ( 8) 00:07:21.760 11191.532 - 11241.945: 9.0889% ( 9) 00:07:21.760 11241.945 - 11292.357: 9.2210% ( 12) 00:07:21.760 11292.357 - 11342.769: 9.3750% ( 14) 00:07:21.760 11342.769 - 11393.182: 9.5401% ( 15) 00:07:21.760 11393.182 - 11443.594: 9.7601% ( 20) 00:07:21.760 11443.594 - 11494.006: 10.0352% ( 25) 00:07:21.760 11494.006 - 11544.418: 10.3323% ( 27) 00:07:21.760 11544.418 - 11594.831: 10.6074% ( 25) 00:07:21.760 11594.831 - 11645.243: 10.9595% ( 32) 00:07:21.760 11645.243 - 11695.655: 11.3996% ( 40) 00:07:21.760 11695.655 - 11746.068: 11.7628% ( 33) 00:07:21.760 11746.068 - 11796.480: 12.1809% ( 38) 00:07:21.760 11796.480 - 11846.892: 12.5880% ( 37) 00:07:21.760 11846.892 - 11897.305: 12.9621% ( 34) 00:07:21.760 11897.305 - 11947.717: 13.3033% ( 31) 00:07:21.760 11947.717 - 11998.129: 13.6224% ( 29) 00:07:21.760 11998.129 - 12048.542: 13.9525% ( 30) 00:07:21.760 12048.542 - 12098.954: 14.3046% ( 32) 00:07:21.760 12098.954 - 12149.366: 14.7117% ( 37) 00:07:21.760 12149.366 - 12199.778: 15.2179% ( 46) 00:07:21.760 12199.778 - 12250.191: 15.6800% ( 42) 00:07:21.760 12250.191 - 12300.603: 16.1862% ( 46) 00:07:21.760 12300.603 - 12351.015: 16.7804% ( 54) 00:07:21.760 12351.015 - 12401.428: 17.2975% ( 47) 00:07:21.760 12401.428 - 12451.840: 17.7707% ( 43) 00:07:21.760 12451.840 - 12502.252: 18.3099% ( 49) 00:07:21.760 12502.252 - 12552.665: 18.7610% ( 41) 00:07:21.760 12552.665 - 12603.077: 19.2672% ( 46) 00:07:21.760 12603.077 - 12653.489: 19.7733% ( 46) 00:07:21.760 12653.489 - 12703.902: 20.3895% ( 56) 00:07:21.760 12703.902 - 12754.314: 21.0827% ( 63) 00:07:21.760 12754.314 - 12804.726: 21.8970% ( 74) 00:07:21.760 12804.726 - 12855.138: 22.7773% ( 80) 00:07:21.760 12855.138 - 12905.551: 23.6686% ( 81) 00:07:21.760 12905.551 - 13006.375: 25.4071% ( 158) 00:07:21.760 13006.375 - 13107.200: 27.1017% ( 154) 00:07:21.760 13107.200 - 13208.025: 28.9393% ( 167) 00:07:21.760 13208.025 - 13308.849: 30.8649% ( 175) 00:07:21.760 13308.849 - 13409.674: 32.8785% ( 183) 00:07:21.760 13409.674 - 13510.498: 34.9142% ( 185) 00:07:21.760 13510.498 - 13611.323: 37.0268% ( 192) 00:07:21.760 13611.323 - 13712.148: 39.2936% ( 206) 00:07:21.760 13712.148 - 13812.972: 41.4062% ( 192) 00:07:21.760 13812.972 - 13913.797: 43.6730% ( 206) 00:07:21.760 13913.797 - 14014.622: 45.9837% ( 210) 00:07:21.760 14014.622 - 14115.446: 48.3495% ( 215) 00:07:21.760 14115.446 - 14216.271: 50.6052% ( 205) 00:07:21.760 14216.271 - 14317.095: 52.8279% ( 202) 00:07:21.760 14317.095 - 14417.920: 54.7865% ( 178) 00:07:21.760 14417.920 - 14518.745: 56.7011% ( 174) 00:07:21.760 14518.745 - 14619.569: 58.4837% ( 162) 00:07:21.760 14619.569 - 14720.394: 60.1452% ( 151) 00:07:21.760 14720.394 - 14821.218: 61.9058% ( 160) 00:07:21.760 14821.218 - 14922.043: 63.5783% ( 152) 00:07:21.760 14922.043 - 15022.868: 65.1849% ( 146) 00:07:21.760 15022.868 - 15123.692: 66.7364% ( 141) 00:07:21.760 15123.692 - 15224.517: 68.5629% ( 166) 
00:07:21.760 15224.517 - 15325.342: 70.4225% ( 169) 00:07:21.760 15325.342 - 15426.166: 72.0951% ( 152) 00:07:21.760 15426.166 - 15526.991: 73.8556% ( 160) 00:07:21.760 15526.991 - 15627.815: 75.6272% ( 161) 00:07:21.760 15627.815 - 15728.640: 77.4538% ( 166) 00:07:21.760 15728.640 - 15829.465: 79.0933% ( 149) 00:07:21.760 15829.465 - 15930.289: 80.6448% ( 141) 00:07:21.760 15930.289 - 16031.114: 82.1413% ( 136) 00:07:21.760 16031.114 - 16131.938: 83.6268% ( 135) 00:07:21.760 16131.938 - 16232.763: 84.8922% ( 115) 00:07:21.760 16232.763 - 16333.588: 85.9155% ( 93) 00:07:21.760 16333.588 - 16434.412: 86.9718% ( 96) 00:07:21.760 16434.412 - 16535.237: 87.9952% ( 93) 00:07:21.760 16535.237 - 16636.062: 89.0185% ( 93) 00:07:21.760 16636.062 - 16736.886: 89.9978% ( 89) 00:07:21.760 16736.886 - 16837.711: 90.8011% ( 73) 00:07:21.760 16837.711 - 16938.535: 91.4723% ( 61) 00:07:21.760 16938.535 - 17039.360: 92.0445% ( 52) 00:07:21.760 17039.360 - 17140.185: 92.6056% ( 51) 00:07:21.760 17140.185 - 17241.009: 93.1668% ( 51) 00:07:21.760 17241.009 - 17341.834: 93.6950% ( 48) 00:07:21.760 17341.834 - 17442.658: 94.1681% ( 43) 00:07:21.760 17442.658 - 17543.483: 94.5423% ( 34) 00:07:21.760 17543.483 - 17644.308: 95.0594% ( 47) 00:07:21.760 17644.308 - 17745.132: 95.4445% ( 35) 00:07:21.760 17745.132 - 17845.957: 95.7636% ( 29) 00:07:21.760 17845.957 - 17946.782: 96.0497% ( 26) 00:07:21.760 17946.782 - 18047.606: 96.3468% ( 27) 00:07:21.760 18047.606 - 18148.431: 96.6329% ( 26) 00:07:21.760 18148.431 - 18249.255: 96.9190% ( 26) 00:07:21.760 18249.255 - 18350.080: 97.1941% ( 25) 00:07:21.760 18350.080 - 18450.905: 97.5132% ( 29) 00:07:21.760 18450.905 - 18551.729: 97.7333% ( 20) 00:07:21.760 18551.729 - 18652.554: 97.8653% ( 12) 00:07:21.760 18652.554 - 18753.378: 98.0084% ( 13) 00:07:21.760 18753.378 - 18854.203: 98.1184% ( 10) 00:07:21.760 18854.203 - 18955.028: 98.3275% ( 19) 00:07:21.760 18955.028 - 19055.852: 98.4705% ( 13) 00:07:21.760 19055.852 - 19156.677: 98.5915% ( 11) 00:07:21.760 19156.677 - 19257.502: 98.7456% ( 14) 00:07:21.760 19257.502 - 19358.326: 98.8556% ( 10) 00:07:21.760 19358.326 - 19459.151: 98.9987% ( 13) 00:07:21.760 19459.151 - 19559.975: 99.0647% ( 6) 00:07:21.760 19559.975 - 19660.800: 99.1417% ( 7) 00:07:21.760 19660.800 - 19761.625: 99.2077% ( 6) 00:07:21.760 19761.625 - 19862.449: 99.2738% ( 6) 00:07:21.760 19862.449 - 19963.274: 99.2958% ( 2) 00:07:21.760 21475.643 - 21576.468: 99.3288% ( 3) 00:07:21.760 21576.468 - 21677.292: 99.3728% ( 4) 00:07:21.760 21677.292 - 21778.117: 99.4168% ( 4) 00:07:21.760 21778.117 - 21878.942: 99.4498% ( 3) 00:07:21.760 21878.942 - 21979.766: 99.4938% ( 4) 00:07:21.760 21979.766 - 22080.591: 99.5379% ( 4) 00:07:21.760 22080.591 - 22181.415: 99.5709% ( 3) 00:07:21.760 22181.415 - 22282.240: 99.6149% ( 4) 00:07:21.760 22282.240 - 22383.065: 99.6589% ( 4) 00:07:21.760 22383.065 - 22483.889: 99.6919% ( 3) 00:07:21.760 22483.889 - 22584.714: 99.7359% ( 4) 00:07:21.760 22584.714 - 22685.538: 99.7799% ( 4) 00:07:21.760 22685.538 - 22786.363: 99.8129% ( 3) 00:07:21.760 22786.363 - 22887.188: 99.8460% ( 3) 00:07:21.760 22887.188 - 22988.012: 99.8900% ( 4) 00:07:21.760 22988.012 - 23088.837: 99.9340% ( 4) 00:07:21.760 23088.837 - 23189.662: 99.9670% ( 3) 00:07:21.760 23189.662 - 23290.486: 100.0000% ( 3) 00:07:21.760 00:07:21.760 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:21.760 ============================================================================== 00:07:21.760 Range in us Cumulative IO count 00:07:21.760 5822.622 - 
5847.828: 0.0440% ( 4) 00:07:21.760 5847.828 - 5873.034: 0.1320% ( 8) 00:07:21.760 5873.034 - 5898.240: 0.1761% ( 4) 00:07:21.760 5898.240 - 5923.446: 0.2311% ( 5) 00:07:21.760 5923.446 - 5948.652: 0.2971% ( 6) 00:07:21.760 5948.652 - 5973.858: 0.3411% ( 4) 00:07:21.760 5973.858 - 5999.065: 0.4181% ( 7) 00:07:21.760 5999.065 - 6024.271: 0.4952% ( 7) 00:07:21.760 6024.271 - 6049.477: 0.5612% ( 6) 00:07:21.760 6049.477 - 6074.683: 0.6162% ( 5) 00:07:21.760 6074.683 - 6099.889: 0.6932% ( 7) 00:07:21.760 6099.889 - 6125.095: 0.7812% ( 8) 00:07:21.760 6125.095 - 6150.302: 0.8803% ( 9) 00:07:21.760 6150.302 - 6175.508: 0.9573% ( 7) 00:07:21.760 6175.508 - 6200.714: 1.0563% ( 9) 00:07:21.760 6200.714 - 6225.920: 1.1444% ( 8) 00:07:21.760 6225.920 - 6251.126: 1.2104% ( 6) 00:07:21.760 6251.126 - 6276.332: 1.2984% ( 8) 00:07:21.760 6276.332 - 6301.538: 1.3754% ( 7) 00:07:21.760 6301.538 - 6326.745: 1.4525% ( 7) 00:07:21.760 6326.745 - 6351.951: 1.5405% ( 8) 00:07:21.760 6351.951 - 6377.157: 1.6175% ( 7) 00:07:21.760 6377.157 - 6402.363: 1.6945% ( 7) 00:07:21.760 6402.363 - 6427.569: 1.7826% ( 8) 00:07:21.760 6427.569 - 6452.775: 1.8706% ( 8) 00:07:21.760 6452.775 - 6503.188: 2.0467% ( 16) 00:07:21.760 6503.188 - 6553.600: 2.2227% ( 16) 00:07:21.760 6553.600 - 6604.012: 2.3878% ( 15) 00:07:21.760 6604.012 - 6654.425: 2.5088% ( 11) 00:07:21.760 6654.425 - 6704.837: 2.6408% ( 12) 00:07:21.760 6704.837 - 6755.249: 2.6959% ( 5) 00:07:21.760 6755.249 - 6805.662: 2.8389% ( 13) 00:07:21.760 6805.662 - 6856.074: 2.9049% ( 6) 00:07:21.760 6856.074 - 6906.486: 2.9599% ( 5) 00:07:21.760 6906.486 - 6956.898: 3.0040% ( 4) 00:07:21.760 6956.898 - 7007.311: 3.0370% ( 3) 00:07:21.760 7007.311 - 7057.723: 3.0700% ( 3) 00:07:21.760 7057.723 - 7108.135: 3.1030% ( 3) 00:07:21.760 7108.135 - 7158.548: 3.1360% ( 3) 00:07:21.760 7158.548 - 7208.960: 3.1690% ( 3) 00:07:21.760 7208.960 - 7259.372: 3.2020% ( 3) 00:07:21.760 7259.372 - 7309.785: 3.2350% ( 3) 00:07:21.760 7309.785 - 7360.197: 3.2790% ( 4) 00:07:21.760 7360.197 - 7410.609: 3.3121% ( 3) 00:07:21.760 7410.609 - 7461.022: 3.3451% ( 3) 00:07:21.760 7461.022 - 7511.434: 3.4111% ( 6) 00:07:21.760 7511.434 - 7561.846: 3.4881% ( 7) 00:07:21.760 7561.846 - 7612.258: 3.5541% ( 6) 00:07:21.761 7612.258 - 7662.671: 3.6202% ( 6) 00:07:21.761 7662.671 - 7713.083: 3.6862% ( 6) 00:07:21.761 7713.083 - 7763.495: 3.7852% ( 9) 00:07:21.761 7763.495 - 7813.908: 3.8842% ( 9) 00:07:21.761 7813.908 - 7864.320: 3.9833% ( 9) 00:07:21.761 7864.320 - 7914.732: 4.0493% ( 6) 00:07:21.761 7914.732 - 7965.145: 4.1043% ( 5) 00:07:21.761 7965.145 - 8015.557: 4.1703% ( 6) 00:07:21.761 8015.557 - 8065.969: 4.2474% ( 7) 00:07:21.761 8065.969 - 8116.382: 4.3134% ( 6) 00:07:21.761 8116.382 - 8166.794: 4.3904% ( 7) 00:07:21.761 8166.794 - 8217.206: 4.5335% ( 13) 00:07:21.761 8217.206 - 8267.618: 4.6655% ( 12) 00:07:21.761 8267.618 - 8318.031: 4.7425% ( 7) 00:07:21.761 8318.031 - 8368.443: 4.8526% ( 10) 00:07:21.761 8368.443 - 8418.855: 4.9846% ( 12) 00:07:21.761 8418.855 - 8469.268: 5.1056% ( 11) 00:07:21.761 8469.268 - 8519.680: 5.2047% ( 9) 00:07:21.761 8519.680 - 8570.092: 5.3257% ( 11) 00:07:21.761 8570.092 - 8620.505: 5.4467% ( 11) 00:07:21.761 8620.505 - 8670.917: 5.5788% ( 12) 00:07:21.761 8670.917 - 8721.329: 5.6778% ( 9) 00:07:21.761 8721.329 - 8771.742: 5.7328% ( 5) 00:07:21.761 8771.742 - 8822.154: 5.7879% ( 5) 00:07:21.761 8822.154 - 8872.566: 5.8539% ( 6) 00:07:21.761 8872.566 - 8922.978: 5.9199% ( 6) 00:07:21.761 8922.978 - 8973.391: 5.9969% ( 7) 00:07:21.761 8973.391 - 9023.803: 
6.0299% ( 3) 00:07:21.761 9023.803 - 9074.215: 6.0629% ( 3) 00:07:21.761 9074.215 - 9124.628: 6.0849% ( 2) 00:07:21.761 9124.628 - 9175.040: 6.1180% ( 3) 00:07:21.761 9175.040 - 9225.452: 6.1510% ( 3) 00:07:21.761 9225.452 - 9275.865: 6.1840% ( 3) 00:07:21.761 9275.865 - 9326.277: 6.2170% ( 3) 00:07:21.761 9326.277 - 9376.689: 6.2390% ( 2) 00:07:21.761 9376.689 - 9427.102: 6.2720% ( 3) 00:07:21.761 9427.102 - 9477.514: 6.3050% ( 3) 00:07:21.761 9477.514 - 9527.926: 6.3270% ( 2) 00:07:21.761 9527.926 - 9578.338: 6.3820% ( 5) 00:07:21.761 9578.338 - 9628.751: 6.4481% ( 6) 00:07:21.761 9628.751 - 9679.163: 6.5141% ( 6) 00:07:21.761 9679.163 - 9729.575: 6.6241% ( 10) 00:07:21.761 9729.575 - 9779.988: 6.7121% ( 8) 00:07:21.761 9779.988 - 9830.400: 6.7892% ( 7) 00:07:21.761 9830.400 - 9880.812: 6.8882% ( 9) 00:07:21.761 9880.812 - 9931.225: 6.9872% ( 9) 00:07:21.761 9931.225 - 9981.637: 7.0973% ( 10) 00:07:21.761 9981.637 - 10032.049: 7.2073% ( 10) 00:07:21.761 10032.049 - 10082.462: 7.3063% ( 9) 00:07:21.761 10082.462 - 10132.874: 7.3944% ( 8) 00:07:21.761 10132.874 - 10183.286: 7.4934% ( 9) 00:07:21.761 10183.286 - 10233.698: 7.5924% ( 9) 00:07:21.761 10233.698 - 10284.111: 7.6915% ( 9) 00:07:21.761 10284.111 - 10334.523: 7.7905% ( 9) 00:07:21.761 10334.523 - 10384.935: 7.8895% ( 9) 00:07:21.761 10384.935 - 10435.348: 7.9996% ( 10) 00:07:21.761 10435.348 - 10485.760: 8.0986% ( 9) 00:07:21.761 10485.760 - 10536.172: 8.1976% ( 9) 00:07:21.761 10536.172 - 10586.585: 8.2636% ( 6) 00:07:21.761 10586.585 - 10636.997: 8.3407% ( 7) 00:07:21.761 10636.997 - 10687.409: 8.4287% ( 8) 00:07:21.761 10687.409 - 10737.822: 8.4947% ( 6) 00:07:21.761 10737.822 - 10788.234: 8.5607% ( 6) 00:07:21.761 10788.234 - 10838.646: 8.6598% ( 9) 00:07:21.761 10838.646 - 10889.058: 8.7588% ( 9) 00:07:21.761 10889.058 - 10939.471: 8.8468% ( 8) 00:07:21.761 10939.471 - 10989.883: 8.9569% ( 10) 00:07:21.761 10989.883 - 11040.295: 9.0559% ( 9) 00:07:21.761 11040.295 - 11090.708: 9.1439% ( 8) 00:07:21.761 11090.708 - 11141.120: 9.2540% ( 10) 00:07:21.761 11141.120 - 11191.532: 9.3420% ( 8) 00:07:21.761 11191.532 - 11241.945: 9.4850% ( 13) 00:07:21.761 11241.945 - 11292.357: 9.5841% ( 9) 00:07:21.761 11292.357 - 11342.769: 9.7051% ( 11) 00:07:21.761 11342.769 - 11393.182: 9.9582% ( 23) 00:07:21.761 11393.182 - 11443.594: 10.1122% ( 14) 00:07:21.761 11443.594 - 11494.006: 10.2663% ( 14) 00:07:21.761 11494.006 - 11544.418: 10.5304% ( 24) 00:07:21.761 11544.418 - 11594.831: 10.8055% ( 25) 00:07:21.761 11594.831 - 11645.243: 11.0585% ( 23) 00:07:21.761 11645.243 - 11695.655: 11.3226% ( 24) 00:07:21.761 11695.655 - 11746.068: 11.5867% ( 24) 00:07:21.761 11746.068 - 11796.480: 11.8288% ( 22) 00:07:21.761 11796.480 - 11846.892: 12.1259% ( 27) 00:07:21.761 11846.892 - 11897.305: 12.4450% ( 29) 00:07:21.761 11897.305 - 11947.717: 12.7311% ( 26) 00:07:21.761 11947.717 - 11998.129: 13.0172% ( 26) 00:07:21.761 11998.129 - 12048.542: 13.3583% ( 31) 00:07:21.761 12048.542 - 12098.954: 13.6664% ( 28) 00:07:21.761 12098.954 - 12149.366: 14.0955% ( 39) 00:07:21.761 12149.366 - 12199.778: 14.4916% ( 36) 00:07:21.761 12199.778 - 12250.191: 14.8988% ( 37) 00:07:21.761 12250.191 - 12300.603: 15.4379% ( 49) 00:07:21.761 12300.603 - 12351.015: 15.9991% ( 51) 00:07:21.761 12351.015 - 12401.428: 16.5713% ( 52) 00:07:21.761 12401.428 - 12451.840: 17.1215% ( 50) 00:07:21.761 12451.840 - 12502.252: 17.6386% ( 47) 00:07:21.761 12502.252 - 12552.665: 18.1888% ( 50) 00:07:21.761 12552.665 - 12603.077: 18.7720% ( 53) 00:07:21.761 12603.077 - 12653.489: 19.3992% 
( 57) 00:07:21.761 12653.489 - 12703.902: 20.0924% ( 63) 00:07:21.761 12703.902 - 12754.314: 20.8297% ( 67) 00:07:21.761 12754.314 - 12804.726: 21.7760% ( 86) 00:07:21.761 12804.726 - 12855.138: 22.6122% ( 76) 00:07:21.761 12855.138 - 12905.551: 23.6026% ( 90) 00:07:21.761 12905.551 - 13006.375: 25.5282% ( 175) 00:07:21.761 13006.375 - 13107.200: 27.7289% ( 200) 00:07:21.761 13107.200 - 13208.025: 29.8526% ( 193) 00:07:21.761 13208.025 - 13308.849: 32.3283% ( 225) 00:07:21.761 13308.849 - 13409.674: 34.7161% ( 217) 00:07:21.761 13409.674 - 13510.498: 37.0489% ( 212) 00:07:21.761 13510.498 - 13611.323: 39.3926% ( 213) 00:07:21.761 13611.323 - 13712.148: 41.6593% ( 206) 00:07:21.761 13712.148 - 13812.972: 43.8160% ( 196) 00:07:21.761 13812.972 - 13913.797: 45.9617% ( 195) 00:07:21.761 13913.797 - 14014.622: 47.8763% ( 174) 00:07:21.761 14014.622 - 14115.446: 49.5709% ( 154) 00:07:21.761 14115.446 - 14216.271: 51.2434% ( 152) 00:07:21.761 14216.271 - 14317.095: 52.6518% ( 128) 00:07:21.761 14317.095 - 14417.920: 54.2804% ( 148) 00:07:21.761 14417.920 - 14518.745: 55.8649% ( 144) 00:07:21.761 14518.745 - 14619.569: 57.3504% ( 135) 00:07:21.761 14619.569 - 14720.394: 58.8358% ( 135) 00:07:21.761 14720.394 - 14821.218: 60.6624% ( 166) 00:07:21.761 14821.218 - 14922.043: 62.4890% ( 166) 00:07:21.761 14922.043 - 15022.868: 64.4806% ( 181) 00:07:21.761 15022.868 - 15123.692: 66.5273% ( 186) 00:07:21.761 15123.692 - 15224.517: 68.3979% ( 170) 00:07:21.761 15224.517 - 15325.342: 70.3125% ( 174) 00:07:21.761 15325.342 - 15426.166: 72.1611% ( 168) 00:07:21.761 15426.166 - 15526.991: 74.2077% ( 186) 00:07:21.761 15526.991 - 15627.815: 76.1114% ( 173) 00:07:21.761 15627.815 - 15728.640: 77.9379% ( 166) 00:07:21.761 15728.640 - 15829.465: 79.5445% ( 146) 00:07:21.761 15829.465 - 15930.289: 81.0629% ( 138) 00:07:21.761 15930.289 - 16031.114: 82.5154% ( 132) 00:07:21.761 16031.114 - 16131.938: 83.7698% ( 114) 00:07:21.761 16131.938 - 16232.763: 85.1342% ( 124) 00:07:21.761 16232.763 - 16333.588: 86.2456% ( 101) 00:07:21.761 16333.588 - 16434.412: 87.2579% ( 92) 00:07:21.761 16434.412 - 16535.237: 88.1272% ( 79) 00:07:21.761 16535.237 - 16636.062: 88.9195% ( 72) 00:07:21.761 16636.062 - 16736.886: 89.7227% ( 73) 00:07:21.761 16736.886 - 16837.711: 90.5150% ( 72) 00:07:21.761 16837.711 - 16938.535: 91.2302% ( 65) 00:07:21.761 16938.535 - 17039.360: 91.8244% ( 54) 00:07:21.761 17039.360 - 17140.185: 92.3746% ( 50) 00:07:21.761 17140.185 - 17241.009: 92.9137% ( 49) 00:07:21.761 17241.009 - 17341.834: 93.4199% ( 46) 00:07:21.761 17341.834 - 17442.658: 94.0471% ( 57) 00:07:21.761 17442.658 - 17543.483: 94.5863% ( 49) 00:07:21.761 17543.483 - 17644.308: 94.8944% ( 28) 00:07:21.761 17644.308 - 17745.132: 95.2355% ( 31) 00:07:21.761 17745.132 - 17845.957: 95.5546% ( 29) 00:07:21.761 17845.957 - 17946.782: 95.8847% ( 30) 00:07:21.761 17946.782 - 18047.606: 96.3468% ( 42) 00:07:21.761 18047.606 - 18148.431: 96.7210% ( 34) 00:07:21.761 18148.431 - 18249.255: 97.0621% ( 31) 00:07:21.761 18249.255 - 18350.080: 97.3371% ( 25) 00:07:21.761 18350.080 - 18450.905: 97.5682% ( 21) 00:07:21.761 18450.905 - 18551.729: 97.7553% ( 17) 00:07:21.761 18551.729 - 18652.554: 97.9643% ( 19) 00:07:21.761 18652.554 - 18753.378: 98.1734% ( 19) 00:07:21.761 18753.378 - 18854.203: 98.3605% ( 17) 00:07:21.761 18854.203 - 18955.028: 98.5035% ( 13) 00:07:21.761 18955.028 - 19055.852: 98.5805% ( 7) 00:07:21.761 19055.852 - 19156.677: 98.6246% ( 4) 00:07:21.761 19156.677 - 19257.502: 98.6796% ( 5) 00:07:21.762 19257.502 - 19358.326: 98.7456% ( 6) 
00:07:21.762 19358.326 - 19459.151: 98.7896% ( 4) 00:07:21.762 19459.151 - 19559.975: 98.8226% ( 3) 00:07:21.762 19559.975 - 19660.800: 98.8886% ( 6) 00:07:21.762 19660.800 - 19761.625: 98.9547% ( 6) 00:07:21.762 19761.625 - 19862.449: 99.0207% ( 6) 00:07:21.762 19862.449 - 19963.274: 99.1087% ( 8) 00:07:21.762 19963.274 - 20064.098: 99.2188% ( 10) 00:07:21.762 20064.098 - 20164.923: 99.3178% ( 9) 00:07:21.762 20164.923 - 20265.748: 99.4278% ( 10) 00:07:21.762 20265.748 - 20366.572: 99.4828% ( 5) 00:07:21.762 20366.572 - 20467.397: 99.5158% ( 3) 00:07:21.762 20467.397 - 20568.222: 99.5599% ( 4) 00:07:21.762 20568.222 - 20669.046: 99.6039% ( 4) 00:07:21.762 20669.046 - 20769.871: 99.6479% ( 4) 00:07:21.762 20769.871 - 20870.695: 99.6809% ( 3) 00:07:21.762 20870.695 - 20971.520: 99.7249% ( 4) 00:07:21.762 20971.520 - 21072.345: 99.7689% ( 4) 00:07:21.762 21072.345 - 21173.169: 99.8129% ( 4) 00:07:21.762 21173.169 - 21273.994: 99.8460% ( 3) 00:07:21.762 21273.994 - 21374.818: 99.8900% ( 4) 00:07:21.762 21374.818 - 21475.643: 99.9340% ( 4) 00:07:21.762 21475.643 - 21576.468: 99.9780% ( 4) 00:07:21.762 21576.468 - 21677.292: 100.0000% ( 2) 00:07:21.762 00:07:21.762 20:18:05 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:23.162 Initializing NVMe Controllers 00:07:23.162 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:23.162 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:23.162 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:23.162 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:23.162 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:23.162 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:23.162 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:23.162 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:23.162 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:23.162 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:23.162 Initialization complete. Launching workers. 
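A quick note on the run that follows: the `spdk_nvme_perf` invocation above issues 12 KiB writes (`-w write -o 12288`) at queue depth 128 (`-q 128`) for 1 second (`-t 1`); repeating `-L` raises the software latency-tracking level, which is what produces the per-device summary tables and per-bucket histograms printed below. As a sanity check, the MiB/s column in the summary table should equal IOPS × io_size / 2^20. A minimal sketch of that check, assuming only standard awk, with numbers taken from the PCIE (0000:00:13.0) row below:

```bash
# Sanity-check spdk_nvme_perf throughput: MiB/s = IOPS * io_size_bytes / 2^20.
# io_size comes from the -o flag of the run above (12288 bytes = 12 KiB).
iops=16530.87
io_size=12288
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / 1048576 }'
# Prints 193.72 MiB/s, matching the 0000:00:13.0 row in the summary table.
```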
00:07:23.162 ======================================================== 00:07:23.162 Latency(us) 00:07:23.162 Device Information : IOPS MiB/s Average min max 00:07:23.162 PCIE (0000:00:13.0) NSID 1 from core 0: 16530.87 193.72 7755.15 5789.66 31857.83 00:07:23.162 PCIE (0000:00:10.0) NSID 1 from core 0: 16530.87 193.72 7743.26 5804.11 30597.15 00:07:23.162 PCIE (0000:00:11.0) NSID 1 from core 0: 16530.87 193.72 7731.08 5922.47 28653.20 00:07:23.162 PCIE (0000:00:12.0) NSID 1 from core 0: 16530.87 193.72 7719.23 5823.98 26938.05 00:07:23.162 PCIE (0000:00:12.0) NSID 2 from core 0: 16530.87 193.72 7707.33 5817.98 25183.14 00:07:23.162 PCIE (0000:00:12.0) NSID 3 from core 0: 16594.70 194.47 7665.78 5847.53 20183.88 00:07:23.162 ======================================================== 00:07:23.162 Total : 99249.05 1163.07 7720.27 5789.66 31857.83 00:07:23.162 00:07:23.162 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:23.162 ================================================================================= 00:07:23.162 1.00000% : 6225.920us 00:07:23.162 10.00000% : 6503.188us 00:07:23.162 25.00000% : 6704.837us 00:07:23.162 50.00000% : 6956.898us 00:07:23.163 75.00000% : 7864.320us 00:07:23.163 90.00000% : 10233.698us 00:07:23.163 95.00000% : 11241.945us 00:07:23.163 98.00000% : 13308.849us 00:07:23.163 99.00000% : 14417.920us 00:07:23.163 99.50000% : 28029.243us 00:07:23.163 99.90000% : 31457.280us 00:07:23.163 99.99000% : 31860.578us 00:07:23.163 99.99900% : 31860.578us 00:07:23.163 99.99990% : 31860.578us 00:07:23.163 99.99999% : 31860.578us 00:07:23.163 00:07:23.163 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:23.163 ================================================================================= 00:07:23.163 1.00000% : 6099.889us 00:07:23.163 10.00000% : 6427.569us 00:07:23.163 25.00000% : 6654.425us 00:07:23.163 50.00000% : 7007.311us 00:07:23.163 75.00000% : 7864.320us 00:07:23.163 90.00000% : 10183.286us 00:07:23.163 95.00000% : 11342.769us 00:07:23.163 98.00000% : 13006.375us 00:07:23.163 99.00000% : 14821.218us 00:07:23.163 99.50000% : 25407.803us 00:07:23.163 99.90000% : 30247.385us 00:07:23.163 99.99000% : 30650.683us 00:07:23.163 99.99900% : 30650.683us 00:07:23.163 99.99990% : 30650.683us 00:07:23.163 99.99999% : 30650.683us 00:07:23.163 00:07:23.163 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:23.163 ================================================================================= 00:07:23.163 1.00000% : 6175.508us 00:07:23.163 10.00000% : 6503.188us 00:07:23.163 25.00000% : 6704.837us 00:07:23.163 50.00000% : 6956.898us 00:07:23.163 75.00000% : 7864.320us 00:07:23.163 90.00000% : 10132.874us 00:07:23.163 95.00000% : 11443.594us 00:07:23.163 98.00000% : 12855.138us 00:07:23.163 99.00000% : 15022.868us 00:07:23.163 99.50000% : 23693.785us 00:07:23.163 99.90000% : 28432.542us 00:07:23.163 99.99000% : 28634.191us 00:07:23.163 99.99900% : 28835.840us 00:07:23.163 99.99990% : 28835.840us 00:07:23.163 99.99999% : 28835.840us 00:07:23.163 00:07:23.163 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:23.163 ================================================================================= 00:07:23.163 1.00000% : 6150.302us 00:07:23.163 10.00000% : 6503.188us 00:07:23.163 25.00000% : 6704.837us 00:07:23.163 50.00000% : 6956.898us 00:07:23.163 75.00000% : 7864.320us 00:07:23.163 90.00000% : 10183.286us 00:07:23.163 95.00000% : 11645.243us 00:07:23.163 98.00000% : 13006.375us 00:07:23.163 
99.00000% : 14417.920us 00:07:23.163 99.50000% : 22080.591us 00:07:23.163 99.90000% : 26617.698us 00:07:23.163 99.99000% : 27020.997us 00:07:23.163 99.99900% : 27020.997us 00:07:23.163 99.99990% : 27020.997us 00:07:23.163 99.99999% : 27020.997us 00:07:23.163 00:07:23.163 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:23.163 ================================================================================= 00:07:23.163 1.00000% : 6225.920us 00:07:23.163 10.00000% : 6503.188us 00:07:23.163 25.00000% : 6704.837us 00:07:23.163 50.00000% : 6956.898us 00:07:23.163 75.00000% : 7813.908us 00:07:23.163 90.00000% : 10233.698us 00:07:23.163 95.00000% : 11393.182us 00:07:23.163 98.00000% : 13510.498us 00:07:23.163 99.00000% : 14317.095us 00:07:23.163 99.50000% : 20366.572us 00:07:23.163 99.90000% : 24802.855us 00:07:23.163 99.99000% : 25206.154us 00:07:23.163 99.99900% : 25206.154us 00:07:23.163 99.99990% : 25206.154us 00:07:23.163 99.99999% : 25206.154us 00:07:23.163 00:07:23.163 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:23.163 ================================================================================= 00:07:23.163 1.00000% : 6200.714us 00:07:23.163 10.00000% : 6503.188us 00:07:23.163 25.00000% : 6704.837us 00:07:23.163 50.00000% : 6956.898us 00:07:23.163 75.00000% : 7864.320us 00:07:23.163 90.00000% : 10132.874us 00:07:23.163 95.00000% : 11241.945us 00:07:23.163 98.00000% : 13409.674us 00:07:23.163 99.00000% : 14417.920us 00:07:23.163 99.50000% : 15224.517us 00:07:23.163 99.90000% : 19862.449us 00:07:23.163 99.99000% : 20164.923us 00:07:23.163 99.99900% : 20265.748us 00:07:23.163 99.99990% : 20265.748us 00:07:23.163 99.99999% : 20265.748us 00:07:23.163 00:07:23.163 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:23.163 ============================================================================== 00:07:23.163 Range in us Cumulative IO count 00:07:23.163 5772.209 - 5797.415: 0.0060% ( 1) 00:07:23.163 5923.446 - 5948.652: 0.0181% ( 2) 00:07:23.163 5948.652 - 5973.858: 0.0362% ( 3) 00:07:23.163 5973.858 - 5999.065: 0.0664% ( 5) 00:07:23.163 5999.065 - 6024.271: 0.0965% ( 5) 00:07:23.163 6024.271 - 6049.477: 0.1267% ( 5) 00:07:23.163 6049.477 - 6074.683: 0.1629% ( 6) 00:07:23.163 6074.683 - 6099.889: 0.2051% ( 7) 00:07:23.163 6099.889 - 6125.095: 0.2956% ( 15) 00:07:23.163 6125.095 - 6150.302: 0.3982% ( 17) 00:07:23.163 6150.302 - 6175.508: 0.6636% ( 44) 00:07:23.163 6175.508 - 6200.714: 0.9110% ( 41) 00:07:23.163 6200.714 - 6225.920: 1.1704% ( 43) 00:07:23.163 6225.920 - 6251.126: 1.4479% ( 46) 00:07:23.163 6251.126 - 6276.332: 1.8581% ( 68) 00:07:23.163 6276.332 - 6301.538: 2.6725% ( 135) 00:07:23.163 6301.538 - 6326.745: 3.5594% ( 147) 00:07:23.163 6326.745 - 6351.951: 4.0842% ( 87) 00:07:23.163 6351.951 - 6377.157: 4.8142% ( 121) 00:07:23.163 6377.157 - 6402.363: 6.0087% ( 198) 00:07:23.163 6402.363 - 6427.569: 7.7763% ( 293) 00:07:23.163 6427.569 - 6452.775: 9.1819% ( 233) 00:07:23.163 6452.775 - 6503.188: 11.9872% ( 465) 00:07:23.163 6503.188 - 6553.600: 15.5224% ( 586) 00:07:23.163 6553.600 - 6604.012: 19.0939% ( 592) 00:07:23.163 6604.012 - 6654.425: 22.8644% ( 625) 00:07:23.163 6654.425 - 6704.837: 27.2563% ( 728) 00:07:23.163 6704.837 - 6755.249: 32.8125% ( 921) 00:07:23.163 6755.249 - 6805.662: 37.4879% ( 775) 00:07:23.163 6805.662 - 6856.074: 41.7531% ( 707) 00:07:23.163 6856.074 - 6906.486: 47.1344% ( 892) 00:07:23.163 6906.486 - 6956.898: 50.4645% ( 552) 00:07:23.163 6956.898 - 7007.311: 53.4085% ( 488) 
00:07:23.163 [latency histogram bucket data omitted: cumulative distribution continues from 56.8171% at 7057.723 us to 100.0000% at 31860.578 us]
00:07:23.164
00:07:23.164 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:23.164 ==============================================================================
00:07:23.164 Range in us Cumulative IO count
00:07:23.164 [latency histogram bucket data omitted: 5797.415 us (0.0060% cumulative) through 30650.683 us (100.0000% cumulative)]
00:07:23.165
00:07:23.165 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:23.165 ==============================================================================
00:07:23.165 Range in us Cumulative IO count
00:07:23.165 [latency histogram bucket data omitted: 5898.240 us (0.0060% cumulative) through 28835.840 us (100.0000% cumulative)]
00:07:23.166
00:07:23.166 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:23.166 ==============================================================================
00:07:23.166 Range in us Cumulative IO count
00:07:23.166 [latency histogram bucket data omitted: 5822.622 us (0.0060% cumulative) through 27020.997 us (100.0000% cumulative)]
00:07:23.168
00:07:23.168 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:23.168 ==============================================================================
00:07:23.168 Range in us Cumulative IO count
00:07:23.168 [latency histogram bucket data omitted: 5797.415 us (0.0060% cumulative) through 25206.154 us (100.0000% cumulative)]
00:07:23.169
00:07:23.169 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:23.169 ==============================================================================
00:07:23.169 Range in us Cumulative IO count
00:07:23.169 [latency histogram bucket data omitted: 5822.622 us (0.0060% cumulative) through 20265.748 us (100.0000% cumulative)]
00:07:23.170
00:07:23.170 20:18:07 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:23.170
00:07:23.170 real 0m2.503s
00:07:23.170 user 0m2.210s
00:07:23.170 sys 0m0.192s
00:07:23.170 20:18:07 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.170 20:18:07 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:23.170 ************************************
00:07:23.170 END TEST nvme_perf
00:07:23.170 ************************************
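The cumulative tables above come from SPDK's tick-based latency histogram, which the perf tool walks bucket by bucket to print the "Range in us Cumulative IO count" columns. Below is a minimal sketch of that walk, assuming the spdk_histogram_data API from include/spdk/histogram_data.h (the tool tallies one spdk_get_ticks() delta per completed I/O; the exact print format here is illustrative, not the tool's verbatim output):

#include <stdio.h>
#include <stdint.h>
#include "spdk/env.h"
#include "spdk/histogram_data.h"

/* Invoked once per histogram bucket, lowest latency first. `so_far` is the
 * running count of I/Os at or below `end`, which is what makes the printed
 * cumulative percentage climb monotonically to 100.0000%. */
static void
print_bucket(void *ctx, uint64_t start, uint64_t end, uint64_t count,
	     uint64_t total, uint64_t so_far)
{
	double us_per_tick = 1000.0 * 1000.0 / spdk_get_ticks_hz();

	if (count == 0) {
		return;
	}
	printf("%9.3f - %9.3f: %7.4f%% (%ju)\n",
	       start * us_per_tick, end * us_per_tick,
	       (double)so_far * 100.0 / total, (uintmax_t)count);
}

/* One spdk_histogram_data per namespace; the I/O completion path calls
 * spdk_histogram_data_tally(h, tsc_end - tsc_start) for every request. */
void
dump_latency_histogram(struct spdk_histogram_data *h)
{
	spdk_histogram_data_iterate(h, print_bucket, NULL);
}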
00:07:23.170 20:18:07 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:23.170 20:18:07 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:23.170 20:18:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:23.170 20:18:07 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:23.170 ************************************
00:07:23.170 START TEST nvme_hello_world
00:07:23.170 ************************************
00:07:23.170 20:18:07 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:23.170 Initializing NVMe Controllers
00:07:23.170 Attached to 0000:00:13.0
00:07:23.170 Namespace ID: 1 size: 1GB
00:07:23.170 Attached to 0000:00:10.0
00:07:23.170 Namespace ID: 1 size: 6GB
00:07:23.170 Attached to 0000:00:11.0
00:07:23.170 Namespace ID: 1 size: 5GB
00:07:23.170 Attached to 0000:00:12.0
00:07:23.170 Namespace ID: 1 size: 4GB
00:07:23.170 Namespace ID: 2 size: 4GB
00:07:23.170 Namespace ID: 3 size: 4GB
00:07:23.170 Initialization complete.
00:07:23.170 INFO: using host memory buffer for IO
00:07:23.170 Hello world!
00:07:23.170 INFO: using host memory buffer for IO
00:07:23.170 Hello world!
00:07:23.170 INFO: using host memory buffer for IO
00:07:23.170 Hello world!
00:07:23.170 INFO: using host memory buffer for IO
00:07:23.170 Hello world!
00:07:23.170 INFO: using host memory buffer for IO
00:07:23.170 Hello world!
00:07:23.170 INFO: using host memory buffer for IO
00:07:23.170 Hello world!
00:07:23.170
00:07:23.170 real 0m0.218s
00:07:23.170 user 0m0.092s
00:07:23.170 sys 0m0.084s
00:07:23.170 20:18:07 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.170 20:18:07 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:23.170 ************************************
00:07:23.170 END TEST nvme_hello_world
00:07:23.170 ************************************
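Each "Hello world!" above is one namespace completing a write of that string and a read-back through a DMA-safe host buffer, which is what the "using host memory buffer for IO" lines refer to. A minimal sketch of the per-namespace flow, assuming the public SPDK NVMe API (probe/attach setup and error handling omitted; the 0x1000 buffer size and LBA 0 are illustrative choices, the shipped example lives at build/examples/hello_world):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Completion callback: flip a flag the poll loop below watches. */
static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	*(bool *)arg = true;
}

/* Write "Hello world!" to LBA 0 of one namespace, then read it back. */
void
hello_ns(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair)
{
	bool done = false;
	/* Pinned, DMA-capable host memory buffer for the I/O. */
	char *buf = spdk_zmalloc(0x1000, 0x1000, NULL,
				 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

	snprintf(buf, 0x1000, "%s", "Hello world!\n");
	spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */, 1 /* blocks */,
			       io_complete, &done, 0);
	while (!done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}

	done = false;
	memset(buf, 0, 0x1000);
	spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_complete, &done, 0);
	while (!done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}
	printf("%s", buf);
	spdk_free(buf);
}

The driver is polled rather than interrupt-driven, so the loop around spdk_nvme_qpair_process_completions() is what advances the I/O to completion.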
00:07:23.170 20:18:07 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:23.170 20:18:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:23.170 20:18:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:23.170 20:18:07 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:23.170 ************************************
00:07:23.170 START TEST nvme_sgl
00:07:23.170 ************************************
00:07:23.170 20:18:07 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:23.428 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:23.428 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:23.428 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:23.428 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:23.428 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:23.428 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:23.428 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:23.428 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:23.428 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:23.428 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:23.428 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:23.428 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:23.428 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:23.428 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:23.428 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:23.428 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:23.428 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:23.428 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:23.428 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:23.428 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:23.428 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:23.428 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:23.428 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:23.428 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:23.428 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:23.428 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:23.428 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:23.428 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:23.428 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:23.428 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:23.428 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:23.428 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:23.428 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:23.428 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:23.428 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:23.428 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:23.428 NVMe Readv/Writev Request test
00:07:23.428 Attached to 0000:00:13.0
00:07:23.428 Attached to 0000:00:10.0
00:07:23.428 Attached to 0000:00:11.0
00:07:23.428 Attached to 0000:00:12.0
00:07:23.428 0000:00:10.0: build_io_request_2 test passed
00:07:23.428 0000:00:10.0: build_io_request_4 test passed
00:07:23.428 0000:00:10.0: build_io_request_5 test passed
00:07:23.428 0000:00:10.0: build_io_request_6 test passed
00:07:23.428 0000:00:10.0: build_io_request_7 test passed
00:07:23.428 0000:00:10.0: build_io_request_10 test passed
00:07:23.428 0000:00:11.0: build_io_request_2 test passed
00:07:23.428 0000:00:11.0: build_io_request_4 test passed
00:07:23.428 0000:00:11.0: build_io_request_5 test passed
00:07:23.428 0000:00:11.0: build_io_request_6 test passed
00:07:23.428 0000:00:11.0: build_io_request_7 test passed
00:07:23.428 0000:00:11.0: build_io_request_10 test passed
00:07:23.428 Cleaning up...
00:07:23.686
00:07:23.686 real 0m0.284s
00:07:23.686 user 0m0.148s
00:07:23.686 sys 0m0.090s
00:07:23.686 20:18:07 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.686 20:18:07 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:23.686 ************************************
00:07:23.686 END TEST nvme_sgl
00:07:23.686 ************************************
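nvme_sgl exercises the scatter-gather variants of the I/O API: each build_io_request_N assembles a different iovec layout, and the "Invalid IO length parameter" lines are layouts whose total length the test expects the driver to reject. A minimal sketch of a vectored write through the two SGL callbacks, assuming the upstream spdk_nvme_ns_cmd_writev() signature (the two-segment layout and the LBA/count values are illustrative):

#include <sys/uio.h>
#include "spdk/nvme.h"

struct sgl_ctx {
	struct iovec iov[2];
	int idx;
};

/* The driver rewinds the SGL to `offset` bytes before building each
 * command; this simplified version only supports offset == 0. */
static void
reset_sgl(void *ref, uint32_t offset)
{
	struct sgl_ctx *ctx = ref;

	(void)offset; /* offset handling elided for brevity */
	ctx->idx = 0;
}

/* The driver pulls segments one at a time; if the summed lengths are not
 * a multiple of the sector size, the request fails, which is the case the
 * "Invalid IO length parameter" lines above report. */
static int
next_sge(void *ref, void **address, uint32_t *length)
{
	struct sgl_ctx *ctx = ref;

	*address = ctx->iov[ctx->idx].iov_base;
	*length = ctx->iov[ctx->idx].iov_len;
	ctx->idx++;
	return 0;
}

int
writev_example(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	       struct sgl_ctx *ctx, spdk_nvme_cmd_cb cb, void *cb_arg)
{
	/* Two segments covering 8 blocks starting at LBA 0. */
	return spdk_nvme_ns_cmd_writev(ns, qpair, 0, 8, cb, ctx ? cb_arg : cb_arg,
				       0, reset_sgl, next_sge);
}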
00:07:23.686 00:07:23.686 real 0m0.193s 00:07:23.686 user 0m0.066s 00:07:23.686 sys 0m0.086s 00:07:23.686 20:18:07 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.686 20:18:07 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:23.686 ************************************ 00:07:23.686 END TEST nvme_e2edp 00:07:23.686 ************************************ 00:07:23.944 20:18:07 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:23.944 20:18:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.944 20:18:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.944 20:18:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.944 ************************************ 00:07:23.944 START TEST nvme_reserve 00:07:23.944 ************************************ 00:07:23.944 20:18:07 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:23.944 ===================================================== 00:07:23.944 NVMe Controller at PCI bus 0, device 19, function 0 00:07:23.944 ===================================================== 00:07:23.944 Reservations: Not Supported 00:07:23.944 ===================================================== 00:07:23.944 NVMe Controller at PCI bus 0, device 16, function 0 00:07:23.944 ===================================================== 00:07:23.944 Reservations: Not Supported 00:07:23.944 ===================================================== 00:07:23.944 NVMe Controller at PCI bus 0, device 17, function 0 00:07:23.944 ===================================================== 00:07:23.944 Reservations: Not Supported 00:07:23.944 ===================================================== 00:07:23.944 NVMe Controller at PCI bus 0, device 18, function 0 00:07:23.944 ===================================================== 00:07:23.944 Reservations: Not Supported 00:07:23.944 Reservation test passed 00:07:23.944 00:07:23.944 real 0m0.220s 00:07:23.944 user 0m0.077s 00:07:23.944 sys 0m0.089s 00:07:23.944 20:18:08 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.944 20:18:08 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:23.944 ************************************ 00:07:23.944 END TEST nvme_reserve 00:07:23.944 ************************************ 00:07:24.202 20:18:08 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:24.202 20:18:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.202 20:18:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.202 20:18:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:24.202 ************************************ 00:07:24.202 START TEST nvme_err_injection 00:07:24.202 ************************************ 00:07:24.202 20:18:08 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:24.202 NVMe Error Injection test 00:07:24.202 Attached to 0000:00:13.0 00:07:24.202 Attached to 0000:00:10.0 00:07:24.202 Attached to 0000:00:11.0 00:07:24.202 Attached to 0000:00:12.0 00:07:24.202 0000:00:13.0: get features failed as expected 00:07:24.202 0000:00:10.0: get features failed as expected 00:07:24.202 0000:00:11.0: get features failed as expected 00:07:24.202 0000:00:12.0: get features failed as expected 00:07:24.202 
0000:00:13.0: get features successfully as expected 00:07:24.202 0000:00:10.0: get features successfully as expected 00:07:24.202 0000:00:11.0: get features successfully as expected 00:07:24.202 0000:00:12.0: get features successfully as expected 00:07:24.202 0000:00:13.0: read failed as expected 00:07:24.202 0000:00:10.0: read failed as expected 00:07:24.202 0000:00:11.0: read failed as expected 00:07:24.202 0000:00:12.0: read failed as expected 00:07:24.202 0000:00:13.0: read successfully as expected 00:07:24.202 0000:00:10.0: read successfully as expected 00:07:24.202 0000:00:11.0: read successfully as expected 00:07:24.202 0000:00:12.0: read successfully as expected 00:07:24.202 Cleaning up... 00:07:24.202 00:07:24.202 real 0m0.214s 00:07:24.202 user 0m0.084s 00:07:24.202 sys 0m0.091s 00:07:24.202 20:18:08 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.202 20:18:08 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:24.202 ************************************ 00:07:24.202 END TEST nvme_err_injection 00:07:24.202 ************************************ 00:07:24.460 20:18:08 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:24.460 20:18:08 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:24.460 20:18:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.460 20:18:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:24.460 ************************************ 00:07:24.460 START TEST nvme_overhead 00:07:24.460 ************************************ 00:07:24.460 20:18:08 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:25.392 Initializing NVMe Controllers 00:07:25.392 Attached to 0000:00:13.0 00:07:25.392 Attached to 0000:00:10.0 00:07:25.392 Attached to 0000:00:11.0 00:07:25.392 Attached to 0000:00:12.0 00:07:25.392 Initialization complete. Launching workers. 
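The "failed as expected" / "successfully as expected" pairs in the nvme_err_injection block above come from SPDK's software command error injection: arm a one-shot failure for an opcode, watch the command fail, disarm, and repeat the command successfully. A sketch of that arm/verify/disarm cycle, assuming the public spdk_nvme_qpair_add_cmd_error_injection API, with qpair == NULL taken to target the admin queue as the test tool does:

    #include <spdk/nvme.h>

    /* Arm a one-shot failure for Get Features, let it fail, then disarm
     * so the retry succeeds -- the cycle behind the "failed as expected"
     * and "successfully as expected" lines above. */
    static int get_features_err_cycle(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc;

        rc = spdk_nvme_qpair_add_cmd_error_injection(
                ctrlr, NULL /* admin qpair */, SPDK_NVME_OPC_GET_FEATURES,
                false /* do_not_submit */, 0 /* timeout_in_us */,
                1 /* err_count: fail exactly one command */,
                SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_FIELD);
        if (rc != 0) {
            return rc;
        }

        /* ... submit Get Features, poll admin completions, and expect
         * an error status here ... */

        spdk_nvme_qpair_remove_cmd_error_injection(
                ctrlr, NULL, SPDK_NVME_OPC_GET_FEATURES);

        /* ... submit Get Features again and expect success ... */
        return 0;
    }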
00:07:25.392 submit (in ns) avg, min, max = 11374.3, 10349.2, 325104.6 00:07:25.392 complete (in ns) avg, min, max = 7637.1, 7262.3, 149220.8 00:07:25.392 00:07:25.392 Submit histogram 00:07:25.392 ================ 00:07:25.392 Range in us Cumulative Count 00:07:25.392 10.338 - 10.388: 0.0057% ( 1) 00:07:25.392 10.437 - 10.486: 0.0113% ( 1) 00:07:25.392 10.535 - 10.585: 0.0170% ( 1) 00:07:25.392 10.782 - 10.831: 0.0511% ( 6) 00:07:25.392 10.831 - 10.880: 0.4992% ( 79) 00:07:25.392 10.880 - 10.929: 2.7343% ( 394) 00:07:25.392 10.929 - 10.978: 8.9573% ( 1097) 00:07:25.392 10.978 - 11.028: 20.8192% ( 2091) 00:07:25.392 11.028 - 11.077: 37.1455% ( 2878) 00:07:25.392 11.077 - 11.126: 52.2578% ( 2664) 00:07:25.392 11.126 - 11.175: 64.3465% ( 2131) 00:07:25.392 11.175 - 11.225: 72.5607% ( 1448) 00:07:25.392 11.225 - 11.274: 77.5641% ( 882) 00:07:25.392 11.274 - 11.323: 80.3665% ( 494) 00:07:25.392 11.323 - 11.372: 82.3292% ( 346) 00:07:25.392 11.372 - 11.422: 83.6907% ( 240) 00:07:25.392 11.422 - 11.471: 84.5984% ( 160) 00:07:25.392 11.471 - 11.520: 85.4550% ( 151) 00:07:25.392 11.520 - 11.569: 86.2208% ( 135) 00:07:25.392 11.569 - 11.618: 86.9582% ( 130) 00:07:25.392 11.618 - 11.668: 87.7014% ( 131) 00:07:25.392 11.668 - 11.717: 88.3991% ( 123) 00:07:25.392 11.717 - 11.766: 89.2444% ( 149) 00:07:25.392 11.766 - 11.815: 90.1237% ( 155) 00:07:25.392 11.815 - 11.865: 90.8895% ( 135) 00:07:25.392 11.865 - 11.914: 91.4795% ( 104) 00:07:25.392 11.914 - 11.963: 92.0865% ( 107) 00:07:25.392 11.963 - 12.012: 92.7445% ( 116) 00:07:25.392 12.012 - 12.062: 93.5671% ( 145) 00:07:25.392 12.062 - 12.111: 94.2762% ( 125) 00:07:25.392 12.111 - 12.160: 94.8945% ( 109) 00:07:25.392 12.160 - 12.209: 95.4618% ( 100) 00:07:25.392 12.209 - 12.258: 96.0120% ( 97) 00:07:25.392 12.258 - 12.308: 96.3637% ( 62) 00:07:25.392 12.308 - 12.357: 96.5850% ( 39) 00:07:25.392 12.357 - 12.406: 96.7211% ( 24) 00:07:25.392 12.406 - 12.455: 96.8062% ( 15) 00:07:25.392 12.455 - 12.505: 96.9083% ( 18) 00:07:25.392 12.505 - 12.554: 96.9821% ( 13) 00:07:25.392 12.554 - 12.603: 97.0275% ( 8) 00:07:25.392 12.603 - 12.702: 97.0785% ( 9) 00:07:25.392 12.702 - 12.800: 97.1239% ( 8) 00:07:25.392 12.800 - 12.898: 97.1636% ( 7) 00:07:25.392 12.898 - 12.997: 97.1976% ( 6) 00:07:25.392 12.997 - 13.095: 97.2317% ( 6) 00:07:25.392 13.095 - 13.194: 97.3111% ( 14) 00:07:25.392 13.194 - 13.292: 97.4132% ( 18) 00:07:25.392 13.292 - 13.391: 97.5494% ( 24) 00:07:25.392 13.391 - 13.489: 97.6401% ( 16) 00:07:25.392 13.489 - 13.588: 97.7309% ( 16) 00:07:25.392 13.588 - 13.686: 97.8160% ( 15) 00:07:25.392 13.686 - 13.785: 97.9011% ( 15) 00:07:25.392 13.785 - 13.883: 97.9408% ( 7) 00:07:25.392 13.883 - 13.982: 97.9635% ( 4) 00:07:25.392 13.982 - 14.080: 98.0088% ( 8) 00:07:25.392 14.080 - 14.178: 98.0429% ( 6) 00:07:25.392 14.178 - 14.277: 98.0486% ( 1) 00:07:25.392 14.277 - 14.375: 98.0939% ( 8) 00:07:25.392 14.375 - 14.474: 98.1507% ( 10) 00:07:25.392 14.474 - 14.572: 98.1734% ( 4) 00:07:25.392 14.572 - 14.671: 98.2074% ( 6) 00:07:25.392 14.671 - 14.769: 98.2414% ( 6) 00:07:25.392 14.769 - 14.868: 98.2528% ( 2) 00:07:25.392 14.868 - 14.966: 98.2755% ( 4) 00:07:25.392 15.065 - 15.163: 98.3492% ( 13) 00:07:25.392 15.163 - 15.262: 98.3776% ( 5) 00:07:25.392 15.262 - 15.360: 98.4116% ( 6) 00:07:25.392 15.360 - 15.458: 98.4570% ( 8) 00:07:25.392 15.458 - 15.557: 98.5024% ( 8) 00:07:25.392 15.557 - 15.655: 98.5307% ( 5) 00:07:25.392 15.655 - 15.754: 98.5478% ( 3) 00:07:25.392 15.754 - 15.852: 98.5705% ( 4) 00:07:25.392 15.852 - 15.951: 98.5875% ( 3) 00:07:25.392 
15.951 - 16.049: 98.6045% ( 3) 00:07:25.392 16.049 - 16.148: 98.6272% ( 4) 00:07:25.392 16.148 - 16.246: 98.6329% ( 1) 00:07:25.392 16.246 - 16.345: 98.6385% ( 1) 00:07:25.392 16.345 - 16.443: 98.6499% ( 2) 00:07:25.392 16.443 - 16.542: 98.7009% ( 9) 00:07:25.650 16.542 - 16.640: 98.7406% ( 7) 00:07:25.650 16.640 - 16.738: 98.8428% ( 18) 00:07:25.650 16.738 - 16.837: 98.9052% ( 11) 00:07:25.650 16.837 - 16.935: 98.9619% ( 10) 00:07:25.650 16.935 - 17.034: 99.0300% ( 12) 00:07:25.650 17.034 - 17.132: 99.1207% ( 16) 00:07:25.650 17.132 - 17.231: 99.2172% ( 17) 00:07:25.650 17.231 - 17.329: 99.2739% ( 10) 00:07:25.650 17.329 - 17.428: 99.3249% ( 9) 00:07:25.650 17.428 - 17.526: 99.3646% ( 7) 00:07:25.650 17.526 - 17.625: 99.4327% ( 12) 00:07:25.650 17.625 - 17.723: 99.4611% ( 5) 00:07:25.650 17.723 - 17.822: 99.4951% ( 6) 00:07:25.650 17.822 - 17.920: 99.5348% ( 7) 00:07:25.650 17.920 - 18.018: 99.5689% ( 6) 00:07:25.650 18.018 - 18.117: 99.6143% ( 8) 00:07:25.650 18.215 - 18.314: 99.6369% ( 4) 00:07:25.650 18.314 - 18.412: 99.6596% ( 4) 00:07:25.650 18.412 - 18.511: 99.6823% ( 4) 00:07:25.650 18.511 - 18.609: 99.6880% ( 1) 00:07:25.650 18.609 - 18.708: 99.6993% ( 2) 00:07:25.650 18.708 - 18.806: 99.7107% ( 2) 00:07:25.650 18.806 - 18.905: 99.7164% ( 1) 00:07:25.650 18.905 - 19.003: 99.7277% ( 2) 00:07:25.650 19.003 - 19.102: 99.7334% ( 1) 00:07:25.650 19.298 - 19.397: 99.7391% ( 1) 00:07:25.650 19.495 - 19.594: 99.7447% ( 1) 00:07:25.650 19.594 - 19.692: 99.7674% ( 4) 00:07:25.650 19.692 - 19.791: 99.7844% ( 3) 00:07:25.650 19.889 - 19.988: 99.7901% ( 1) 00:07:25.650 19.988 - 20.086: 99.7958% ( 1) 00:07:25.650 20.480 - 20.578: 99.8015% ( 1) 00:07:25.650 20.578 - 20.677: 99.8071% ( 1) 00:07:25.650 20.775 - 20.874: 99.8185% ( 2) 00:07:25.650 20.874 - 20.972: 99.8241% ( 1) 00:07:25.650 21.268 - 21.366: 99.8468% ( 4) 00:07:25.650 21.465 - 21.563: 99.8582% ( 2) 00:07:25.650 21.858 - 21.957: 99.8695% ( 2) 00:07:25.650 22.154 - 22.252: 99.8752% ( 1) 00:07:25.650 22.252 - 22.351: 99.8809% ( 1) 00:07:25.650 22.351 - 22.449: 99.8922% ( 2) 00:07:25.650 22.745 - 22.843: 99.9036% ( 2) 00:07:25.650 23.335 - 23.434: 99.9092% ( 1) 00:07:25.650 24.025 - 24.123: 99.9206% ( 2) 00:07:25.650 24.222 - 24.320: 99.9263% ( 1) 00:07:25.650 24.320 - 24.418: 99.9319% ( 1) 00:07:25.650 25.994 - 26.191: 99.9376% ( 1) 00:07:25.650 28.357 - 28.554: 99.9433% ( 1) 00:07:25.650 29.342 - 29.538: 99.9489% ( 1) 00:07:25.650 34.068 - 34.265: 99.9546% ( 1) 00:07:25.650 34.462 - 34.658: 99.9603% ( 1) 00:07:25.650 35.249 - 35.446: 99.9660% ( 1) 00:07:25.650 38.597 - 38.794: 99.9716% ( 1) 00:07:25.650 39.975 - 40.172: 99.9773% ( 1) 00:07:25.650 46.474 - 46.671: 99.9830% ( 1) 00:07:25.650 53.563 - 53.957: 99.9887% ( 1) 00:07:25.650 66.954 - 67.348: 99.9943% ( 1) 00:07:25.650 324.529 - 326.105: 100.0000% ( 1) 00:07:25.650 00:07:25.650 Complete histogram 00:07:25.650 ================== 00:07:25.650 Range in us Cumulative Count 00:07:25.650 7.237 - 7.286: 0.1191% ( 21) 00:07:25.651 7.286 - 7.335: 1.9061% ( 315) 00:07:25.651 7.335 - 7.385: 11.4704% ( 1686) 00:07:25.651 7.385 - 7.434: 32.2952% ( 3671) 00:07:25.651 7.434 - 7.483: 56.5464% ( 4275) 00:07:25.651 7.483 - 7.532: 74.9149% ( 3238) 00:07:25.651 7.532 - 7.582: 85.2564% ( 1823) 00:07:25.651 7.582 - 7.631: 90.6342% ( 948) 00:07:25.651 7.631 - 7.680: 93.7826% ( 555) 00:07:25.651 7.680 - 7.729: 95.1498% ( 241) 00:07:25.651 7.729 - 7.778: 95.9496% ( 141) 00:07:25.651 7.778 - 7.828: 96.3808% ( 76) 00:07:25.651 7.828 - 7.877: 96.5509% ( 30) 00:07:25.651 7.877 - 7.926: 96.6531% ( 18) 
00:07:25.651 7.926 - 7.975: 96.7325% ( 14) 00:07:25.651 7.975 - 8.025: 96.7779% ( 8) 00:07:25.651 8.025 - 8.074: 96.8062% ( 5) 00:07:25.651 8.074 - 8.123: 96.8232% ( 3) 00:07:25.651 8.123 - 8.172: 96.8970% ( 13) 00:07:25.651 8.172 - 8.222: 96.9764% ( 14) 00:07:25.651 8.222 - 8.271: 97.0672% ( 16) 00:07:25.651 8.271 - 8.320: 97.1920% ( 22) 00:07:25.651 8.320 - 8.369: 97.2941% ( 18) 00:07:25.651 8.369 - 8.418: 97.4756% ( 32) 00:07:25.651 8.418 - 8.468: 97.5947% ( 21) 00:07:25.651 8.468 - 8.517: 97.7479% ( 27) 00:07:25.651 8.517 - 8.566: 97.8160% ( 12) 00:07:25.651 8.566 - 8.615: 97.8443% ( 5) 00:07:25.651 8.615 - 8.665: 97.8670% ( 4) 00:07:25.651 8.665 - 8.714: 97.8954% ( 5) 00:07:25.651 8.714 - 8.763: 97.9238% ( 5) 00:07:25.651 8.763 - 8.812: 97.9408% ( 3) 00:07:25.651 8.812 - 8.862: 97.9464% ( 1) 00:07:25.651 8.911 - 8.960: 97.9521% ( 1) 00:07:25.651 8.960 - 9.009: 97.9578% ( 1) 00:07:25.651 9.108 - 9.157: 97.9635% ( 1) 00:07:25.651 9.255 - 9.305: 97.9691% ( 1) 00:07:25.651 9.452 - 9.502: 97.9748% ( 1) 00:07:25.651 9.502 - 9.551: 97.9862% ( 2) 00:07:25.651 9.600 - 9.649: 97.9975% ( 2) 00:07:25.651 9.649 - 9.698: 98.0259% ( 5) 00:07:25.651 9.748 - 9.797: 98.0429% ( 3) 00:07:25.651 9.797 - 9.846: 98.0542% ( 2) 00:07:25.651 9.846 - 9.895: 98.0769% ( 4) 00:07:25.651 9.895 - 9.945: 98.0826% ( 1) 00:07:25.651 9.945 - 9.994: 98.0883% ( 1) 00:07:25.651 9.994 - 10.043: 98.0939% ( 1) 00:07:25.651 10.043 - 10.092: 98.1110% ( 3) 00:07:25.651 10.092 - 10.142: 98.1223% ( 2) 00:07:25.651 10.142 - 10.191: 98.1337% ( 2) 00:07:25.651 10.191 - 10.240: 98.1507% ( 3) 00:07:25.651 10.240 - 10.289: 98.1563% ( 1) 00:07:25.651 10.289 - 10.338: 98.1734% ( 3) 00:07:25.651 10.338 - 10.388: 98.1790% ( 1) 00:07:25.651 10.388 - 10.437: 98.1847% ( 1) 00:07:25.651 10.437 - 10.486: 98.2074% ( 4) 00:07:25.651 10.486 - 10.535: 98.2187% ( 2) 00:07:25.651 10.535 - 10.585: 98.2244% ( 1) 00:07:25.651 10.585 - 10.634: 98.2358% ( 2) 00:07:25.651 10.634 - 10.683: 98.2414% ( 1) 00:07:25.651 10.683 - 10.732: 98.2698% ( 5) 00:07:25.651 10.732 - 10.782: 98.2811% ( 2) 00:07:25.651 10.782 - 10.831: 98.2925% ( 2) 00:07:25.651 10.831 - 10.880: 98.3095% ( 3) 00:07:25.651 11.028 - 11.077: 98.3152% ( 1) 00:07:25.651 11.077 - 11.126: 98.3209% ( 1) 00:07:25.651 11.126 - 11.175: 98.3265% ( 1) 00:07:25.651 11.175 - 11.225: 98.3435% ( 3) 00:07:25.651 11.225 - 11.274: 98.3492% ( 1) 00:07:25.651 11.274 - 11.323: 98.3549% ( 1) 00:07:25.651 11.422 - 11.471: 98.3606% ( 1) 00:07:25.651 11.471 - 11.520: 98.3662% ( 1) 00:07:25.651 11.569 - 11.618: 98.3776% ( 2) 00:07:25.651 11.618 - 11.668: 98.3833% ( 1) 00:07:25.651 11.717 - 11.766: 98.3889% ( 1) 00:07:25.651 11.865 - 11.914: 98.3946% ( 1) 00:07:25.651 11.914 - 11.963: 98.4003% ( 1) 00:07:25.651 12.062 - 12.111: 98.4059% ( 1) 00:07:25.651 12.209 - 12.258: 98.4116% ( 1) 00:07:25.651 12.357 - 12.406: 98.4286% ( 3) 00:07:25.651 12.702 - 12.800: 98.4570% ( 5) 00:07:25.651 12.800 - 12.898: 98.5194% ( 11) 00:07:25.651 12.898 - 12.997: 98.5761% ( 10) 00:07:25.651 12.997 - 13.095: 98.6612% ( 15) 00:07:25.651 13.095 - 13.194: 98.7520% ( 16) 00:07:25.651 13.194 - 13.292: 98.8428% ( 16) 00:07:25.651 13.292 - 13.391: 98.9165% ( 13) 00:07:25.651 13.391 - 13.489: 98.9789% ( 11) 00:07:25.651 13.489 - 13.588: 99.0526% ( 13) 00:07:25.651 13.588 - 13.686: 99.1321% ( 14) 00:07:25.651 13.686 - 13.785: 99.2285% ( 17) 00:07:25.651 13.785 - 13.883: 99.2909% ( 11) 00:07:25.651 13.883 - 13.982: 99.3590% ( 12) 00:07:25.651 13.982 - 14.080: 99.3930% ( 6) 00:07:25.651 14.080 - 14.178: 99.4497% ( 10) 00:07:25.651 14.178 - 14.277: 
99.4781% ( 5) 00:07:25.651 14.277 - 14.375: 99.5405% ( 11) 00:07:25.651 14.375 - 14.474: 99.5859% ( 8) 00:07:25.651 14.474 - 14.572: 99.6199% ( 6) 00:07:25.651 14.572 - 14.671: 99.6483% ( 5) 00:07:25.651 14.671 - 14.769: 99.6596% ( 2) 00:07:25.651 14.769 - 14.868: 99.6767% ( 3) 00:07:25.651 14.868 - 14.966: 99.6880% ( 2) 00:07:25.651 14.966 - 15.065: 99.6937% ( 1) 00:07:25.651 15.065 - 15.163: 99.7107% ( 3) 00:07:25.651 15.262 - 15.360: 99.7164% ( 1) 00:07:25.651 15.458 - 15.557: 99.7277% ( 2) 00:07:25.651 15.557 - 15.655: 99.7391% ( 2) 00:07:25.651 15.655 - 15.754: 99.7504% ( 2) 00:07:25.651 15.951 - 16.049: 99.7561% ( 1) 00:07:25.651 16.049 - 16.148: 99.7617% ( 1) 00:07:25.651 16.345 - 16.443: 99.7731% ( 2) 00:07:25.651 16.443 - 16.542: 99.7788% ( 1) 00:07:25.651 16.640 - 16.738: 99.7844% ( 1) 00:07:25.651 16.738 - 16.837: 99.7901% ( 1) 00:07:25.651 16.837 - 16.935: 99.7958% ( 1) 00:07:25.651 16.935 - 17.034: 99.8071% ( 2) 00:07:25.651 17.034 - 17.132: 99.8128% ( 1) 00:07:25.651 17.625 - 17.723: 99.8185% ( 1) 00:07:25.651 17.920 - 18.018: 99.8298% ( 2) 00:07:25.651 18.117 - 18.215: 99.8355% ( 1) 00:07:25.651 18.412 - 18.511: 99.8525% ( 3) 00:07:25.651 18.511 - 18.609: 99.8582% ( 1) 00:07:25.651 18.609 - 18.708: 99.8639% ( 1) 00:07:25.651 19.298 - 19.397: 99.8695% ( 1) 00:07:25.651 19.495 - 19.594: 99.8752% ( 1) 00:07:25.651 19.594 - 19.692: 99.8809% ( 1) 00:07:25.651 19.988 - 20.086: 99.8865% ( 1) 00:07:25.651 20.086 - 20.185: 99.8922% ( 1) 00:07:25.651 20.185 - 20.283: 99.8979% ( 1) 00:07:25.651 22.843 - 22.942: 99.9036% ( 1) 00:07:25.651 22.942 - 23.040: 99.9092% ( 1) 00:07:25.651 24.025 - 24.123: 99.9149% ( 1) 00:07:25.651 25.009 - 25.108: 99.9206% ( 1) 00:07:25.651 25.797 - 25.994: 99.9263% ( 1) 00:07:25.651 26.978 - 27.175: 99.9319% ( 1) 00:07:25.651 28.160 - 28.357: 99.9376% ( 1) 00:07:25.651 28.554 - 28.751: 99.9433% ( 1) 00:07:25.651 29.342 - 29.538: 99.9489% ( 1) 00:07:25.651 32.689 - 32.886: 99.9546% ( 1) 00:07:25.651 36.037 - 36.234: 99.9603% ( 1) 00:07:25.651 43.520 - 43.717: 99.9660% ( 1) 00:07:25.651 47.655 - 47.852: 99.9716% ( 1) 00:07:25.651 50.806 - 51.200: 99.9773% ( 1) 00:07:25.651 51.988 - 52.382: 99.9830% ( 1) 00:07:25.651 53.563 - 53.957: 99.9887% ( 1) 00:07:25.651 63.803 - 64.197: 99.9943% ( 1) 00:07:25.651 148.874 - 149.662: 100.0000% ( 1) 00:07:25.651 00:07:25.651 00:07:25.651 real 0m1.219s 00:07:25.651 user 0m1.075s 00:07:25.651 sys 0m0.094s 00:07:25.651 20:18:09 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.651 20:18:09 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:25.651 ************************************ 00:07:25.651 END TEST nvme_overhead 00:07:25.651 ************************************ 00:07:25.651 20:18:09 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:25.651 20:18:09 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:25.651 20:18:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.651 20:18:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.651 ************************************ 00:07:25.651 START TEST nvme_arbitration 00:07:25.651 ************************************ 00:07:25.651 20:18:09 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:28.928 Initializing NVMe Controllers 00:07:28.928 Attached to 0000:00:13.0 00:07:28.928 Attached to 0000:00:10.0 00:07:28.928 Attached to 0000:00:11.0 
00:07:28.928 Attached to 0000:00:12.0 00:07:28.928 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:07:28.928 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:07:28.928 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:07:28.928 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:28.928 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:28.928 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:28.928 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:28.928 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:28.928 Initialization complete. Launching workers. 00:07:28.928 Starting thread on core 1 with urgent priority queue 00:07:28.928 Starting thread on core 2 with urgent priority queue 00:07:28.928 Starting thread on core 3 with urgent priority queue 00:07:28.928 Starting thread on core 0 with urgent priority queue 00:07:28.928 QEMU NVMe Ctrl (12343 ) core 0: 810.67 IO/s 123.36 secs/100000 ios 00:07:28.928 QEMU NVMe Ctrl (12342 ) core 0: 810.67 IO/s 123.36 secs/100000 ios 00:07:28.928 QEMU NVMe Ctrl (12340 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:07:28.928 QEMU NVMe Ctrl (12342 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:07:28.928 QEMU NVMe Ctrl (12341 ) core 2: 981.33 IO/s 101.90 secs/100000 ios 00:07:28.928 QEMU NVMe Ctrl (12342 ) core 3: 1002.67 IO/s 99.73 secs/100000 ios 00:07:28.928 ======================================================== 00:07:28.928 00:07:28.928 00:07:28.928 real 0m3.291s 00:07:28.928 user 0m9.207s 00:07:28.928 sys 0m0.114s 00:07:28.929 20:18:12 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.929 20:18:12 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:28.929 ************************************ 00:07:28.929 END TEST nvme_arbitration 00:07:28.929 ************************************ 00:07:28.929 20:18:13 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:28.929 20:18:13 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:28.929 20:18:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.929 20:18:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:28.929 ************************************ 00:07:28.929 START TEST nvme_single_aen 00:07:28.929 ************************************ 00:07:28.929 20:18:13 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:29.187 Asynchronous Event Request test 00:07:29.187 Attached to 0000:00:13.0 00:07:29.187 Attached to 0000:00:10.0 00:07:29.187 Attached to 0000:00:11.0 00:07:29.187 Attached to 0000:00:12.0 00:07:29.187 Reset controller to setup AER completions for this process 00:07:29.187 Registering asynchronous event callbacks... 
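Each "Starting thread on core N with urgent priority queue" line in the arbitration run above corresponds to an I/O queue pair created in a weighted-round-robin priority class, which is what lets the urgent-class cores post different IO/s than the others. A minimal sketch of that allocation, assuming WRR is requested at attach time through the controller opts (the tool's own -a/-b flags are not modeled here):

    #include <spdk/nvme.h>

    /* Ask for weighted round robin while attaching (probe callback). */
    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR;
        return true;
    }

    /* Allocate an I/O queue pair in a given WRR priority class, e.g.
     * SPDK_NVME_QPRIO_URGENT for the urgent-priority threads above. */
    static struct spdk_nvme_qpair *
    alloc_prio_qpair(struct spdk_nvme_ctrlr *ctrlr, enum spdk_nvme_qprio prio)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.qprio = prio;
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }

The per-qpair qprio only takes effect when the controller was brought up with CC.AMS set to weighted round robin, hence the probe_cb hook.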
00:07:29.187 Getting orig temperature thresholds of all controllers 00:07:29.187 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:29.187 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:29.187 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:29.187 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:29.187 Setting all controllers temperature threshold low to trigger AER 00:07:29.187 Waiting for all controllers temperature threshold to be set lower 00:07:29.187 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:29.187 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:29.187 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:29.187 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:29.187 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:29.187 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:29.187 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:29.187 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:29.187 Waiting for all controllers to trigger AER and reset threshold 00:07:29.187 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:29.187 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:29.187 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:29.187 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:29.187 Cleaning up... 00:07:29.187 00:07:29.187 real 0m0.214s 00:07:29.187 user 0m0.072s 00:07:29.187 sys 0m0.099s 00:07:29.187 20:18:13 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.187 20:18:13 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:29.187 ************************************ 00:07:29.187 END TEST nvme_single_aen 00:07:29.187 ************************************ 00:07:29.187 20:18:13 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:29.187 20:18:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.187 20:18:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.187 20:18:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.187 ************************************ 00:07:29.187 START TEST nvme_doorbell_aers 00:07:29.187 ************************************ 00:07:29.187 20:18:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:29.187 20:18:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:29.187 20:18:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:29.187 20:18:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:29.187 20:18:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:29.187 20:18:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:29.187 20:18:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:29.187 20:18:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:29.187 20:18:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:29.187 20:18:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
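The temperature-threshold dance in the single-AEN test above is three steps: register an AER callback, push the threshold feature below the drive's current 323 Kelvin reading so the controller fires an event, then poll the admin queue until it lands. A compact sketch of those steps; the 300 K value is illustrative, not what the test sets:

    #include <spdk/nvme.h>

    static volatile bool aer_done;

    /* Runs when the controller completes an Asynchronous Event Request,
     * e.g. once the composite temperature crosses the threshold. */
    static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        aer_done = true;
    }

    static void set_feat_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
    }

    /* Lower the temperature threshold (TMPTH in cdw11, in Kelvin) below
     * the current reading so the controller raises an event, then poll
     * the admin queue until the AER callback fires. */
    static void trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                300 /* cdw11: threshold in Kelvin */, 0 /* cdw12 */,
                NULL, 0, set_feat_cb, NULL);
        while (!aer_done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }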
00:07:29.187 20:18:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:29.187 20:18:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:29.187 20:18:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:29.187 20:18:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:29.445 [2024-12-12 20:18:13.554336] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:07:39.412 Executing: test_write_invalid_db 00:07:39.412 Waiting for AER completion... 00:07:39.412 Failure: test_write_invalid_db 00:07:39.412 00:07:39.412 Executing: test_invalid_db_write_overflow_sq 00:07:39.412 Waiting for AER completion... 00:07:39.412 Failure: test_invalid_db_write_overflow_sq 00:07:39.412 00:07:39.412 Executing: test_invalid_db_write_overflow_cq 00:07:39.412 Waiting for AER completion... 00:07:39.412 Failure: test_invalid_db_write_overflow_cq 00:07:39.412 00:07:39.412 20:18:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:39.412 20:18:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:39.412 [2024-12-12 20:18:23.570270] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:07:49.376 Executing: test_write_invalid_db 00:07:49.376 Waiting for AER completion... 00:07:49.376 Failure: test_write_invalid_db 00:07:49.376 00:07:49.376 Executing: test_invalid_db_write_overflow_sq 00:07:49.376 Waiting for AER completion... 00:07:49.376 Failure: test_invalid_db_write_overflow_sq 00:07:49.376 00:07:49.376 Executing: test_invalid_db_write_overflow_cq 00:07:49.376 Waiting for AER completion... 00:07:49.376 Failure: test_invalid_db_write_overflow_cq 00:07:49.376 00:07:49.376 20:18:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:49.376 20:18:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:49.376 [2024-12-12 20:18:33.597562] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:07:59.341 Executing: test_write_invalid_db 00:07:59.341 Waiting for AER completion... 00:07:59.341 Failure: test_write_invalid_db 00:07:59.341 00:07:59.341 Executing: test_invalid_db_write_overflow_sq 00:07:59.341 Waiting for AER completion... 00:07:59.341 Failure: test_invalid_db_write_overflow_sq 00:07:59.341 00:07:59.341 Executing: test_invalid_db_write_overflow_cq 00:07:59.341 Waiting for AER completion... 
00:07:59.341 Failure: test_invalid_db_write_overflow_cq 00:07:59.341 00:07:59.341 20:18:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:59.341 20:18:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:07:59.600 [2024-12-12 20:18:43.654911] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:08:09.570 Executing: test_write_invalid_db 00:08:09.570 Waiting for AER completion... 00:08:09.570 Failure: test_write_invalid_db 00:08:09.570 00:08:09.570 Executing: test_invalid_db_write_overflow_sq 00:08:09.570 Waiting for AER completion... 00:08:09.570 Failure: test_invalid_db_write_overflow_sq 00:08:09.570 00:08:09.570 Executing: test_invalid_db_write_overflow_cq 00:08:09.570 Waiting for AER completion... 00:08:09.570 Failure: test_invalid_db_write_overflow_cq 00:08:09.570 00:08:09.570 00:08:09.570 real 0m40.200s 00:08:09.570 user 0m34.229s 00:08:09.570 sys 0m5.597s 00:08:09.570 20:18:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.570 20:18:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:09.570 ************************************ 00:08:09.570 END TEST nvme_doorbell_aers 00:08:09.570 ************************************ 00:08:09.570 20:18:53 nvme -- nvme/nvme.sh@97 -- # uname 00:08:09.570 20:18:53 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:09.570 20:18:53 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:09.570 20:18:53 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:09.570 20:18:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.570 20:18:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.570 ************************************ 00:08:09.570 START TEST nvme_multi_aen 00:08:09.570 ************************************ 00:08:09.570 20:18:53 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:09.570 [2024-12-12 20:18:53.681812] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:08:09.570 [2024-12-12 20:18:53.681865] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:08:09.570 [2024-12-12 20:18:53.681876] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:08:09.570 [2024-12-12 20:18:53.683386] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:08:09.570 [2024-12-12 20:18:53.683429] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:08:09.570 [2024-12-12 20:18:53.683438] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:08:09.570 [2024-12-12 20:18:53.684521] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. 
Dropping the request. 00:08:09.570 [2024-12-12 20:18:53.684545] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:08:09.570 [2024-12-12 20:18:53.684552] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:08:09.570 [2024-12-12 20:18:53.685543] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:08:09.570 [2024-12-12 20:18:53.685565] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:08:09.570 [2024-12-12 20:18:53.685572] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65058) is not found. Dropping the request. 00:08:09.570 Child process pid: 65584 00:08:09.828 [Child] Asynchronous Event Request test 00:08:09.828 [Child] Attached to 0000:00:13.0 00:08:09.828 [Child] Attached to 0000:00:10.0 00:08:09.828 [Child] Attached to 0000:00:11.0 00:08:09.828 [Child] Attached to 0000:00:12.0 00:08:09.828 [Child] Registering asynchronous event callbacks... 00:08:09.828 [Child] Getting orig temperature thresholds of all controllers 00:08:09.828 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:09.828 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:09.828 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:09.828 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:09.828 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:09.828 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:09.828 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:09.828 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:09.828 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:09.828 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.828 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.829 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.829 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.829 [Child] Cleaning up... 00:08:09.829 Asynchronous Event Request test 00:08:09.829 Attached to 0000:00:13.0 00:08:09.829 Attached to 0000:00:10.0 00:08:09.829 Attached to 0000:00:11.0 00:08:09.829 Attached to 0000:00:12.0 00:08:09.829 Reset controller to setup AER completions for this process 00:08:09.829 Registering asynchronous event callbacks... 
00:08:09.829 Getting orig temperature thresholds of all controllers 00:08:09.829 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:09.829 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:09.829 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:09.829 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:09.829 Setting all controllers temperature threshold low to trigger AER 00:08:09.829 Waiting for all controllers temperature threshold to be set lower 00:08:09.829 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:09.829 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:09.829 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:09.829 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:09.829 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:09.829 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:09.829 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:09.829 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:09.829 Waiting for all controllers to trigger AER and reset threshold 00:08:09.829 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.829 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.829 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.829 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.829 Cleaning up... 00:08:09.829 00:08:09.829 real 0m0.421s 00:08:09.829 user 0m0.134s 00:08:09.829 sys 0m0.187s 00:08:09.829 20:18:53 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.829 20:18:53 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:09.829 ************************************ 00:08:09.829 END TEST nvme_multi_aen 00:08:09.829 ************************************ 00:08:09.829 20:18:53 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:09.829 20:18:53 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:09.829 20:18:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.829 20:18:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.829 ************************************ 00:08:09.829 START TEST nvme_startup 00:08:09.829 ************************************ 00:08:09.829 20:18:53 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:10.086 Initializing NVMe Controllers 00:08:10.086 Attached to 0000:00:13.0 00:08:10.086 Attached to 0000:00:10.0 00:08:10.086 Attached to 0000:00:11.0 00:08:10.086 Attached to 0000:00:12.0 00:08:10.086 Initialization complete. 00:08:10.086 Time used:146731.000 (us). 
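"Time used:146731.000 (us)" is the startup tool's measurement of how long probing and attaching all four controllers took. A stand-in sketch of producing such a figure with the env tick counter (the real tool also honors its -t argument, which is not modeled here):

    #include <inttypes.h>
    #include <stdio.h>
    #include <spdk/env.h>
    #include <spdk/nvme.h>

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        return true; /* attach every controller found */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
    }

    int main(void)
    {
        struct spdk_env_opts opts;
        uint64_t t0, us;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) != 0) {
            return 1;
        }

        t0 = spdk_get_ticks();
        spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
        us = (spdk_get_ticks() - t0) * 1000000 / spdk_get_ticks_hz();
        printf("Time used: %" PRIu64 " (us)\n", us);
        return 0;
    }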
00:08:10.086 ************************************ 00:08:10.086 END TEST nvme_startup 00:08:10.086 ************************************ 00:08:10.086 00:08:10.086 real 0m0.207s 00:08:10.086 user 0m0.071s 00:08:10.086 sys 0m0.091s 00:08:10.086 20:18:54 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.086 20:18:54 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:10.086 20:18:54 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:10.086 20:18:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.086 20:18:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.086 20:18:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.086 ************************************ 00:08:10.086 START TEST nvme_multi_secondary 00:08:10.086 ************************************ 00:08:10.086 20:18:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:10.086 20:18:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65633 00:08:10.086 20:18:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65634 00:08:10.086 20:18:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:10.086 20:18:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:10.087 20:18:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:13.365 Initializing NVMe Controllers 00:08:13.365 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:13.365 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:13.365 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:13.365 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:13.365 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:13.365 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:13.365 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:13.366 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:13.366 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:13.366 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:13.366 Initialization complete. Launching workers. 
00:08:13.366 ======================================================== 00:08:13.366 Latency(us) 00:08:13.366 Device Information : IOPS MiB/s Average min max 00:08:13.366 PCIE (0000:00:13.0) NSID 1 from core 1: 7507.28 29.33 2130.85 1006.62 6851.35 00:08:13.366 PCIE (0000:00:10.0) NSID 1 from core 1: 7507.28 29.33 2129.93 961.33 5774.76 00:08:13.366 PCIE (0000:00:11.0) NSID 1 from core 1: 7507.28 29.33 2130.98 1058.67 6198.25 00:08:13.366 PCIE (0000:00:12.0) NSID 1 from core 1: 7507.28 29.33 2131.06 1036.82 6138.49 00:08:13.366 PCIE (0000:00:12.0) NSID 2 from core 1: 7507.28 29.33 2131.12 988.37 6318.28 00:08:13.366 PCIE (0000:00:12.0) NSID 3 from core 1: 7507.28 29.33 2131.23 1023.68 6866.47 00:08:13.366 ======================================================== 00:08:13.366 Total : 45043.66 175.95 2130.86 961.33 6866.47 00:08:13.366 00:08:13.623 Initializing NVMe Controllers 00:08:13.623 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:13.623 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:13.623 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:13.623 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:13.623 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:13.623 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:13.623 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:13.623 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:13.623 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:13.623 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:13.623 Initialization complete. Launching workers. 00:08:13.623 ======================================================== 00:08:13.623 Latency(us) 00:08:13.623 Device Information : IOPS MiB/s Average min max 00:08:13.623 PCIE (0000:00:13.0) NSID 1 from core 2: 3267.33 12.76 4896.13 769.82 12426.51 00:08:13.623 PCIE (0000:00:10.0) NSID 1 from core 2: 3267.33 12.76 4894.92 753.83 12821.41 00:08:13.623 PCIE (0000:00:11.0) NSID 1 from core 2: 3267.33 12.76 4896.08 785.02 12121.50 00:08:13.623 PCIE (0000:00:12.0) NSID 1 from core 2: 3267.33 12.76 4896.39 782.80 12166.59 00:08:13.623 PCIE (0000:00:12.0) NSID 2 from core 2: 3267.33 12.76 4896.03 774.14 12612.44 00:08:13.623 PCIE (0000:00:12.0) NSID 3 from core 2: 3267.33 12.76 4896.37 781.79 12686.64 00:08:13.623 ======================================================== 00:08:13.623 Total : 19603.97 76.58 4895.99 753.83 12821.41 00:08:13.623 00:08:13.623 20:18:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65633 00:08:15.532 Initializing NVMe Controllers 00:08:15.532 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:15.532 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:15.532 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:15.532 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:15.532 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:15.532 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:15.532 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:15.532 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:15.532 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:15.532 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:15.532 Initialization complete. Launching workers. 
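A quick consistency check on the latency tables above: the perf runs use a fixed queue depth of 16 per namespace (-q 16), so by Little's law the average latency should sit near queue depth divided by per-namespace IOPS, which both runs satisfy:

    \bar{L} \approx \frac{Q}{\text{IOPS}}, \qquad
    \frac{16}{7507.28\ \text{IO/s}} \approx 2131\ \mu\text{s}\ (\text{core 1, reported } 2130.86), \qquad
    \frac{16}{3267.33\ \text{IO/s}} \approx 4897\ \mu\text{s}\ (\text{core 2, reported } 4895.99)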
00:08:15.532 ======================================================== 00:08:15.532 Latency(us) 00:08:15.532 Device Information : IOPS MiB/s Average min max 00:08:15.532 PCIE (0000:00:13.0) NSID 1 from core 0: 10711.49 41.84 1493.35 722.44 5404.60 00:08:15.532 PCIE (0000:00:10.0) NSID 1 from core 0: 10711.49 41.84 1492.46 704.57 5654.48 00:08:15.532 PCIE (0000:00:11.0) NSID 1 from core 0: 10711.49 41.84 1493.33 723.73 6225.05 00:08:15.532 PCIE (0000:00:12.0) NSID 1 from core 0: 10711.49 41.84 1493.31 670.61 5945.97 00:08:15.532 PCIE (0000:00:12.0) NSID 2 from core 0: 10711.49 41.84 1493.30 639.26 5814.61 00:08:15.532 PCIE (0000:00:12.0) NSID 3 from core 0: 10711.49 41.84 1493.28 577.97 5913.15 00:08:15.532 ======================================================== 00:08:15.532 Total : 64268.91 251.05 1493.17 577.97 6225.05 00:08:15.532 00:08:15.532 20:18:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65634 00:08:15.532 20:18:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65709 00:08:15.532 20:18:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:15.532 20:18:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65710 00:08:15.532 20:18:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:15.532 20:18:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:18.813 Initializing NVMe Controllers 00:08:18.813 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:18.813 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:18.813 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:18.813 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:18.813 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:18.813 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:18.813 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:18.813 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:18.813 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:18.813 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:18.813 Initialization complete. Launching workers. 
00:08:18.813 ======================================================== 00:08:18.813 Latency(us) 00:08:18.813 Device Information : IOPS MiB/s Average min max 00:08:18.813 PCIE (0000:00:13.0) NSID 1 from core 0: 7989.72 31.21 2002.15 744.44 5586.98 00:08:18.813 PCIE (0000:00:10.0) NSID 1 from core 0: 7989.72 31.21 2001.29 713.28 5609.05 00:08:18.813 PCIE (0000:00:11.0) NSID 1 from core 0: 7989.72 31.21 2002.46 727.54 5429.37 00:08:18.813 PCIE (0000:00:12.0) NSID 1 from core 0: 7989.72 31.21 2002.62 749.91 6109.20 00:08:18.813 PCIE (0000:00:12.0) NSID 2 from core 0: 7989.72 31.21 2002.60 741.83 5916.16 00:08:18.813 PCIE (0000:00:12.0) NSID 3 from core 0: 7989.72 31.21 2002.56 752.43 5783.47 00:08:18.813 ======================================================== 00:08:18.813 Total : 47938.33 187.26 2002.28 713.28 6109.20 00:08:18.813 00:08:18.813 Initializing NVMe Controllers 00:08:18.813 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:18.813 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:18.813 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:18.813 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:18.813 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:18.813 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:18.813 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:18.813 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:18.813 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:18.813 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:18.813 Initialization complete. Launching workers. 00:08:18.813 ======================================================== 00:08:18.813 Latency(us) 00:08:18.813 Device Information : IOPS MiB/s Average min max 00:08:18.813 PCIE (0000:00:13.0) NSID 1 from core 1: 8045.57 31.43 1988.25 700.29 6798.49 00:08:18.813 PCIE (0000:00:10.0) NSID 1 from core 1: 8045.57 31.43 1987.33 679.02 6827.98 00:08:18.813 PCIE (0000:00:11.0) NSID 1 from core 1: 8045.57 31.43 1988.29 698.61 6683.53 00:08:18.813 PCIE (0000:00:12.0) NSID 1 from core 1: 8045.57 31.43 1988.24 705.33 6713.22 00:08:18.813 PCIE (0000:00:12.0) NSID 2 from core 1: 8045.57 31.43 1988.21 696.10 6650.20 00:08:18.813 PCIE (0000:00:12.0) NSID 3 from core 1: 8045.57 31.43 1988.16 699.86 6695.00 00:08:18.813 ======================================================== 00:08:18.813 Total : 48273.44 188.57 1988.08 679.02 6827.98 00:08:18.813 00:08:20.711 Initializing NVMe Controllers 00:08:20.711 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:20.711 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:20.711 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:20.711 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:20.711 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:20.711 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:20.711 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:20.711 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:20.711 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:20.711 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:20.711 Initialization complete. Launching workers. 
00:08:20.711 ======================================================== 00:08:20.711 Latency(us) 00:08:20.711 Device Information : IOPS MiB/s Average min max 00:08:20.711 PCIE (0000:00:13.0) NSID 1 from core 2: 4735.37 18.50 3378.50 730.91 13261.11 00:08:20.711 PCIE (0000:00:10.0) NSID 1 from core 2: 4735.37 18.50 3377.40 707.68 13462.86 00:08:20.711 PCIE (0000:00:11.0) NSID 1 from core 2: 4735.37 18.50 3378.16 668.35 13057.11 00:08:20.711 PCIE (0000:00:12.0) NSID 1 from core 2: 4735.37 18.50 3377.97 716.96 12410.68 00:08:20.711 PCIE (0000:00:12.0) NSID 2 from core 2: 4735.37 18.50 3378.09 749.88 12311.52 00:08:20.711 PCIE (0000:00:12.0) NSID 3 from core 2: 4735.37 18.50 3378.22 723.17 12324.11 00:08:20.711 ======================================================== 00:08:20.711 Total : 28412.24 110.99 3378.06 668.35 13462.86 00:08:20.711 00:08:20.711 20:19:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65709 00:08:20.711 20:19:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65710 00:08:20.711 00:08:20.711 real 0m10.595s 00:08:20.711 user 0m18.394s 00:08:20.711 sys 0m0.587s 00:08:20.711 20:19:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.711 20:19:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:20.711 ************************************ 00:08:20.711 END TEST nvme_multi_secondary 00:08:20.711 ************************************ 00:08:20.711 20:19:04 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:20.711 20:19:04 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:20.711 20:19:04 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64666 ]] 00:08:20.711 20:19:04 nvme -- common/autotest_common.sh@1094 -- # kill 64666 00:08:20.711 20:19:04 nvme -- common/autotest_common.sh@1095 -- # wait 64666 00:08:20.711 [2024-12-12 20:19:04.838326] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.711 [2024-12-12 20:19:04.838441] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.711 [2024-12-12 20:19:04.838480] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.711 [2024-12-12 20:19:04.838503] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.711 [2024-12-12 20:19:04.841344] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.711 [2024-12-12 20:19:04.841405] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.711 [2024-12-12 20:19:04.841435] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.711 [2024-12-12 20:19:04.841449] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.711 [2024-12-12 20:19:04.844065] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 
00:08:20.711 [2024-12-12 20:19:04.844120] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.711 [2024-12-12 20:19:04.844137] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.711 [2024-12-12 20:19:04.844153] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.711 [2024-12-12 20:19:04.846707] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.711 [2024-12-12 20:19:04.846770] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.711 [2024-12-12 20:19:04.846787] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.711 [2024-12-12 20:19:04.846802] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65583) is not found. Dropping the request. 00:08:20.970 20:19:04 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:20.970 20:19:04 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:20.970 20:19:04 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:20.970 20:19:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.970 20:19:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.970 20:19:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:20.970 ************************************ 00:08:20.970 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:20.970 ************************************ 00:08:20.970 20:19:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:20.970 * Looking for test storage... 
00:08:20.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:20.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.970 --rc genhtml_branch_coverage=1 00:08:20.970 --rc genhtml_function_coverage=1 00:08:20.970 --rc genhtml_legend=1 00:08:20.970 --rc geninfo_all_blocks=1 00:08:20.970 --rc geninfo_unexecuted_blocks=1 00:08:20.970 00:08:20.970 ' 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:20.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.970 --rc genhtml_branch_coverage=1 00:08:20.970 --rc genhtml_function_coverage=1 00:08:20.970 --rc genhtml_legend=1 00:08:20.970 --rc geninfo_all_blocks=1 00:08:20.970 --rc geninfo_unexecuted_blocks=1 00:08:20.970 00:08:20.970 ' 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:20.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.970 --rc genhtml_branch_coverage=1 00:08:20.970 --rc genhtml_function_coverage=1 00:08:20.970 --rc genhtml_legend=1 00:08:20.970 --rc geninfo_all_blocks=1 00:08:20.970 --rc geninfo_unexecuted_blocks=1 00:08:20.970 00:08:20.970 ' 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:20.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.970 --rc genhtml_branch_coverage=1 00:08:20.970 --rc genhtml_function_coverage=1 00:08:20.970 --rc genhtml_legend=1 00:08:20.970 --rc geninfo_all_blocks=1 00:08:20.970 --rc geninfo_unexecuted_blocks=1 00:08:20.970 00:08:20.970 ' 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:20.970 
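The long xtrace run above is scripts/common.sh deciding whether the installed lcov predates major version 2, which controls how the coverage flags are spelled (the 1.x series wants --rc lcov_branch_coverage=1; 2.x renamed the keys). Condensed into a stand-alone sketch of the element-wise compare; illustrative, not the verbatim SPDK code:

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"     # "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$3"     # "2"    -> (2)
      local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '<=' || $op == '>=' || $op == '==' ]]   # versions are equal
  }
  lt 1.15 2 && echo 'old lcov: use the --rc lcov_* flag spelling'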
20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65867 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65867 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65867 ']' 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
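get_first_nvme_bdf, traced above, enumerates the PCIe controllers by letting scripts/gen_nvme.sh emit an attach-controller config, pulls each traddr out with jq, and returns the first one. A condensed sketch; it assumes $rootdir points at the SPDK checkout, and head -n1 stands in for the first-element echo the real helper performs:

  get_nvme_bdfs() {
      local -a bdfs
      bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
      (( ${#bdfs[@]} > 0 )) || return 1      # no NVMe controllers found
      printf '%s\n' "${bdfs[@]}"
  }
  get_first_nvme_bdf() { get_nvme_bdfs | head -n1; }
  bdf=$(get_first_nvme_bdf)                  # -> 0000:00:10.0 on this rig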
00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.970 20:19:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:21.227 [2024-12-12 20:19:05.254144] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:08:21.227 [2024-12-12 20:19:05.254255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65867 ] 00:08:21.227 [2024-12-12 20:19:05.417389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.485 [2024-12-12 20:19:05.516675] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.485 [2024-12-12 20:19:05.517246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.485 [2024-12-12 20:19:05.517604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.485 [2024-12-12 20:19:05.517696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:22.051 nvme0n1 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_n2iR1.txt 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:22.051 true 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734034746 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65890 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:22.051 20:19:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:23.991 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:23.991 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.991 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:23.991 [2024-12-12 20:19:08.199477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:23.991 [2024-12-12 20:19:08.199791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:23.991 [2024-12-12 20:19:08.199827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:23.991 [2024-12-12 20:19:08.199840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:23.991 [2024-12-12 20:19:08.201367] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:23.991 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.991 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65890 00:08:23.991 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65890 00:08:23.991 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65890 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_n2iR1.txt 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:24.249 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_n2iR1.txt 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65867 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65867 ']' 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65867 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65867 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65867' 00:08:24.250 killing process with pid 65867 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65867 00:08:24.250 20:19:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65867 00:08:25.624 20:19:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:25.624 20:19:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:25.624 00:08:25.624 real 0m4.550s 00:08:25.624 user 0m16.131s 00:08:25.624 sys 0m0.498s 00:08:25.624 20:19:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:25.624 20:19:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:25.624 ************************************ 00:08:25.624 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:25.624 ************************************ 00:08:25.624 20:19:09 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:25.624 20:19:09 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:25.624 20:19:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.624 20:19:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.624 20:19:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.624 ************************************ 00:08:25.624 START TEST nvme_fio 00:08:25.624 ************************************ 00:08:25.624 20:19:09 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:25.624 20:19:09 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:25.624 20:19:09 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:25.624 20:19:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:25.624 20:19:09 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:25.624 20:19:09 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:25.624 20:19:09 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:25.624 20:19:09 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:25.624 20:19:09 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:25.624 20:19:09 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:25.624 20:19:09 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:25.624 20:19:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:25.624 20:19:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:25.624 20:19:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:25.624 20:19:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:25.624 20:19:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:25.882 20:19:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:25.882 20:19:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:25.882 20:19:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:25.882 20:19:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:25.882 20:19:10 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:25.882 20:19:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:26.140 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:26.140 fio-3.35 00:08:26.140 Starting 1 thread 00:08:30.321 00:08:30.321 test: (groupid=0, jobs=1): err= 0: pid=66025: Thu Dec 12 20:19:14 2024 00:08:30.321 read: IOPS=19.9k, BW=77.7MiB/s (81.5MB/s)(156MiB/2001msec) 00:08:30.321 slat (nsec): min=3345, max=79896, avg=5326.43, stdev=2808.71 00:08:30.321 clat (usec): min=274, max=9469, avg=3207.97, stdev=1200.25 00:08:30.321 lat (usec): min=279, max=9505, avg=3213.30, stdev=1201.57 00:08:30.321 clat percentiles (usec): 00:08:30.321 | 1.00th=[ 2008], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2409], 00:08:30.321 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2704], 60.00th=[ 2868], 00:08:30.321 | 70.00th=[ 3163], 80.00th=[ 3982], 90.00th=[ 5211], 95.00th=[ 5932], 00:08:30.321 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[ 8291], 99.95th=[ 8717], 00:08:30.321 | 99.99th=[ 9372] 00:08:30.321 bw ( KiB/s): min=73040, max=85656, per=99.04%, avg=78840.00, stdev=6369.07, samples=3 00:08:30.321 iops : min=18260, max=21414, avg=19710.00, stdev=1592.27, samples=3 00:08:30.321 write: IOPS=19.9k, BW=77.6MiB/s (81.4MB/s)(155MiB/2001msec); 0 zone resets 00:08:30.321 slat (usec): min=3, max=230, avg= 5.50, stdev= 2.85 00:08:30.321 clat (usec): min=222, max=9399, avg=3205.14, stdev=1188.93 00:08:30.321 lat (usec): min=227, max=9416, avg=3210.64, stdev=1190.16 00:08:30.321 clat percentiles (usec): 00:08:30.321 | 1.00th=[ 2024], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2409], 00:08:30.321 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2704], 60.00th=[ 2868], 00:08:30.321 | 70.00th=[ 3163], 80.00th=[ 3949], 90.00th=[ 5145], 95.00th=[ 5932], 00:08:30.321 | 99.00th=[ 7046], 99.50th=[ 7504], 99.90th=[ 8291], 99.95th=[ 8717], 00:08:30.321 | 99.99th=[ 9241] 00:08:30.321 bw ( KiB/s): min=73184, max=85432, per=99.35%, avg=78928.00, stdev=6159.27, samples=3 00:08:30.321 iops : min=18296, max=21358, avg=19732.00, stdev=1539.82, samples=3 00:08:30.321 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:08:30.321 lat (msec) : 2=0.87%, 4=79.38%, 10=19.70% 00:08:30.321 cpu : usr=99.00%, sys=0.05%, ctx=7, majf=0, 
minf=607 00:08:30.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:30.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:30.322 issued rwts: total=39824,39743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:30.322 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:30.322 00:08:30.322 Run status group 0 (all jobs): 00:08:30.322 READ: bw=77.7MiB/s (81.5MB/s), 77.7MiB/s-77.7MiB/s (81.5MB/s-81.5MB/s), io=156MiB (163MB), run=2001-2001msec 00:08:30.322 WRITE: bw=77.6MiB/s (81.4MB/s), 77.6MiB/s-77.6MiB/s (81.4MB/s-81.4MB/s), io=155MiB (163MB), run=2001-2001msec 00:08:30.322 ----------------------------------------------------- 00:08:30.322 Suppressions used: 00:08:30.322 count bytes template 00:08:30.322 1 32 /usr/src/fio/parse.c 00:08:30.322 1 8 libtcmalloc_minimal.so 00:08:30.322 ----------------------------------------------------- 00:08:30.322 00:08:30.579 20:19:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:30.579 20:19:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:30.579 20:19:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:30.579 20:19:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:30.579 20:19:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:30.579 20:19:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:30.837 20:19:15 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:30.838 20:19:15 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:30.838 20:19:15 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:30.838 20:19:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:31.095 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:31.095 fio-3.35 00:08:31.095 Starting 1 thread 00:08:37.655 00:08:37.655 test: (groupid=0, jobs=1): err= 0: pid=66080: Thu Dec 12 20:19:20 2024 00:08:37.655 read: IOPS=20.6k, BW=80.6MiB/s (84.5MB/s)(161MiB/2001msec) 00:08:37.655 slat (nsec): min=3356, max=69509, avg=5173.36, stdev=2422.06 00:08:37.655 clat (usec): min=215, max=10393, avg=3095.34, stdev=1118.10 00:08:37.655 lat (usec): min=220, max=10428, avg=3100.52, stdev=1119.29 00:08:37.655 clat percentiles (usec): 00:08:37.655 | 1.00th=[ 1926], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:08:37.655 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2802], 00:08:37.655 | 70.00th=[ 3064], 80.00th=[ 3589], 90.00th=[ 4817], 95.00th=[ 5538], 00:08:37.655 | 99.00th=[ 7111], 99.50th=[ 7767], 99.90th=[ 9110], 99.95th=[ 9503], 00:08:37.655 | 99.99th=[10290] 00:08:37.655 bw ( KiB/s): min=74928, max=79912, per=94.47%, avg=77930.67, stdev=2644.32, samples=3 00:08:37.656 iops : min=18732, max=19978, avg=19482.67, stdev=661.08, samples=3 00:08:37.656 write: IOPS=20.6k, BW=80.3MiB/s (84.2MB/s)(161MiB/2001msec); 0 zone resets 00:08:37.656 slat (nsec): min=3496, max=81832, avg=5390.82, stdev=2526.20 00:08:37.656 clat (usec): min=251, max=10558, avg=3095.62, stdev=1096.91 00:08:37.656 lat (usec): min=256, max=10589, avg=3101.01, stdev=1098.07 00:08:37.656 clat percentiles (usec): 00:08:37.656 | 1.00th=[ 1942], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:08:37.656 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2802], 00:08:37.656 | 70.00th=[ 3064], 80.00th=[ 3589], 90.00th=[ 4817], 95.00th=[ 5538], 00:08:37.656 | 99.00th=[ 7046], 99.50th=[ 7439], 99.90th=[ 8848], 99.95th=[ 9503], 00:08:37.656 | 99.99th=[10290] 00:08:37.656 bw ( KiB/s): min=74848, max=80336, per=94.99%, avg=78101.33, stdev=2882.32, samples=3 00:08:37.656 iops : min=18712, max=20084, avg=19525.33, stdev=720.58, samples=3 00:08:37.656 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.03% 00:08:37.656 lat (msec) : 2=1.15%, 4=82.49%, 10=16.27%, 20=0.02% 00:08:37.656 cpu : usr=99.10%, sys=0.00%, ctx=3, majf=0, minf=608 00:08:37.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:37.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:37.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:37.656 issued rwts: total=41267,41130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:37.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:37.656 00:08:37.656 Run status group 0 (all jobs): 00:08:37.656 READ: bw=80.6MiB/s (84.5MB/s), 80.6MiB/s-80.6MiB/s (84.5MB/s-84.5MB/s), io=161MiB (169MB), run=2001-2001msec 00:08:37.656 WRITE: bw=80.3MiB/s (84.2MB/s), 80.3MiB/s-80.3MiB/s (84.2MB/s-84.2MB/s), io=161MiB (168MB), run=2001-2001msec 00:08:37.656 ----------------------------------------------------- 00:08:37.656 Suppressions used: 00:08:37.656 count bytes template 00:08:37.656 1 32 /usr/src/fio/parse.c 00:08:37.656 1 8 libtcmalloc_minimal.so 00:08:37.656 ----------------------------------------------------- 
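Every fio pass in this suite repeats the same preload dance seen above: ldd the SPDK fio plugin, awk the sanitizer runtime path out of its dependencies, and put libasan ahead of the plugin in LD_PRELOAD, because an ASAN-instrumented shared object aborts at load time unless the runtime comes first. Note too that the PCI address in --filename is written with dots (0000.00.11.0), since fio treats ':' as a filename separator. A sketch of the sequence using this rig's paths:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')   # e.g. /usr/lib64/libasan.so.8
  LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
          '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096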
00:08:37.656 00:08:37.656 20:19:20 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:37.656 20:19:20 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:37.656 20:19:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:37.656 20:19:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:37.656 20:19:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:37.656 20:19:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:37.656 20:19:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:37.656 20:19:21 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:37.656 20:19:21 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:37.656 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:37.656 fio-3.35 00:08:37.656 Starting 1 thread 00:08:42.920 00:08:42.920 test: (groupid=0, jobs=1): err= 0: pid=66145: Thu Dec 12 20:19:26 2024 00:08:42.920 read: IOPS=17.0k, BW=66.3MiB/s (69.5MB/s)(133MiB/2001msec) 00:08:42.920 slat (nsec): min=4217, max=81140, avg=5659.14, stdev=2969.06 00:08:42.920 clat (usec): min=356, max=9646, avg=3747.66, stdev=1267.49 00:08:42.920 lat (usec): min=361, max=9651, avg=3753.32, stdev=1268.53 00:08:42.920 clat percentiles (usec): 00:08:42.920 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2671], 
00:08:42.920 | 30.00th=[ 2835], 40.00th=[ 3064], 50.00th=[ 3326], 60.00th=[ 3785], 00:08:42.920 | 70.00th=[ 4293], 80.00th=[ 4883], 90.00th=[ 5538], 95.00th=[ 6194], 00:08:42.920 | 99.00th=[ 7373], 99.50th=[ 7898], 99.90th=[ 8848], 99.95th=[ 9110], 00:08:42.920 | 99.99th=[ 9634] 00:08:42.920 bw ( KiB/s): min=60352, max=78096, per=100.00%, avg=68149.33, stdev=9065.16, samples=3 00:08:42.920 iops : min=15088, max=19524, avg=17037.33, stdev=2266.29, samples=3 00:08:42.920 write: IOPS=17.0k, BW=66.5MiB/s (69.7MB/s)(133MiB/2001msec); 0 zone resets 00:08:42.920 slat (nsec): min=4294, max=82260, avg=5883.80, stdev=3097.62 00:08:42.920 clat (usec): min=323, max=9775, avg=3755.47, stdev=1261.86 00:08:42.920 lat (usec): min=328, max=9792, avg=3761.36, stdev=1262.93 00:08:42.920 clat percentiles (usec): 00:08:42.920 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2671], 00:08:42.920 | 30.00th=[ 2868], 40.00th=[ 3064], 50.00th=[ 3359], 60.00th=[ 3818], 00:08:42.920 | 70.00th=[ 4359], 80.00th=[ 4883], 90.00th=[ 5538], 95.00th=[ 6194], 00:08:42.920 | 99.00th=[ 7308], 99.50th=[ 7767], 99.90th=[ 8848], 99.95th=[ 9241], 00:08:42.920 | 99.99th=[ 9634] 00:08:42.920 bw ( KiB/s): min=60760, max=77624, per=99.86%, avg=67978.67, stdev=8689.95, samples=3 00:08:42.920 iops : min=15190, max=19406, avg=16994.67, stdev=2172.49, samples=3 00:08:42.920 lat (usec) : 500=0.03%, 750=0.01%, 1000=0.02% 00:08:42.920 lat (msec) : 2=0.60%, 4=63.13%, 10=36.19% 00:08:42.920 cpu : usr=98.65%, sys=0.15%, ctx=13, majf=0, minf=607 00:08:42.920 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:42.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:42.920 issued rwts: total=33967,34054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.920 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:42.920 00:08:42.920 Run status group 0 (all jobs): 00:08:42.920 READ: bw=66.3MiB/s (69.5MB/s), 66.3MiB/s-66.3MiB/s (69.5MB/s-69.5MB/s), io=133MiB (139MB), run=2001-2001msec 00:08:42.920 WRITE: bw=66.5MiB/s (69.7MB/s), 66.5MiB/s-66.5MiB/s (69.7MB/s-69.7MB/s), io=133MiB (139MB), run=2001-2001msec 00:08:43.178 ----------------------------------------------------- 00:08:43.178 Suppressions used: 00:08:43.178 count bytes template 00:08:43.178 1 32 /usr/src/fio/parse.c 00:08:43.178 1 8 libtcmalloc_minimal.so 00:08:43.178 ----------------------------------------------------- 00:08:43.178 00:08:43.178 20:19:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:43.178 20:19:27 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:43.178 20:19:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:43.178 20:19:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:43.179 20:19:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:43.179 20:19:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:43.437 20:19:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:43.437 20:19:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:43.437 20:19:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:43.695 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:43.695 fio-3.35 00:08:43.695 Starting 1 thread 00:08:50.265 00:08:50.265 test: (groupid=0, jobs=1): err= 0: pid=66204: Thu Dec 12 20:19:34 2024 00:08:50.265 read: IOPS=15.8k, BW=61.6MiB/s (64.6MB/s)(123MiB/2002msec) 00:08:50.265 slat (usec): min=4, max=743, avg= 6.10, stdev= 5.42 00:08:50.265 clat (usec): min=1133, max=11833, avg=4040.24, stdev=1416.30 00:08:50.265 lat (usec): min=1137, max=11874, avg=4046.34, stdev=1417.45 00:08:50.265 clat percentiles (usec): 00:08:50.265 | 1.00th=[ 2114], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2737], 00:08:50.265 | 30.00th=[ 2966], 40.00th=[ 3228], 50.00th=[ 3621], 60.00th=[ 4293], 00:08:50.265 | 70.00th=[ 4817], 80.00th=[ 5342], 90.00th=[ 5997], 95.00th=[ 6652], 00:08:50.265 | 99.00th=[ 7635], 99.50th=[ 8356], 99.90th=[10159], 99.95th=[10552], 00:08:50.265 | 99.99th=[11731] 00:08:50.265 bw ( KiB/s): min=59832, max=71552, per=100.00%, avg=66210.67, stdev=5928.46, samples=3 00:08:50.265 iops : min=14958, max=17888, avg=16552.67, stdev=1482.12, samples=3 00:08:50.265 write: IOPS=15.8k, BW=61.6MiB/s (64.6MB/s)(123MiB/2002msec); 0 zone resets 00:08:50.265 slat (usec): min=4, max=415, avg= 6.29, stdev= 4.25 00:08:50.265 clat (usec): min=1116, max=11772, avg=4054.64, stdev=1410.32 00:08:50.265 lat (usec): min=1121, max=11785, avg=4060.93, stdev=1411.46 00:08:50.265 clat percentiles (usec): 00:08:50.265 | 1.00th=[ 2114], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2769], 00:08:50.265 | 30.00th=[ 2999], 40.00th=[ 3261], 50.00th=[ 3654], 60.00th=[ 4228], 00:08:50.265 | 70.00th=[ 4817], 80.00th=[ 5407], 
90.00th=[ 6063], 95.00th=[ 6652], 00:08:50.265 | 99.00th=[ 7701], 99.50th=[ 8291], 99.90th=[10028], 99.95th=[10814], 00:08:50.265 | 99.99th=[11207] 00:08:50.265 bw ( KiB/s): min=60216, max=71272, per=100.00%, avg=66152.00, stdev=5572.99, samples=3 00:08:50.265 iops : min=15054, max=17818, avg=16538.00, stdev=1393.25, samples=3 00:08:50.265 lat (msec) : 2=0.47%, 4=55.72%, 10=43.71%, 20=0.11% 00:08:50.265 cpu : usr=97.90%, sys=0.55%, ctx=16, majf=0, minf=606 00:08:50.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:50.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:50.265 issued rwts: total=31551,31573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:50.265 00:08:50.265 Run status group 0 (all jobs): 00:08:50.265 READ: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=123MiB (129MB), run=2002-2002msec 00:08:50.265 WRITE: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=123MiB (129MB), run=2002-2002msec 00:08:50.265 ----------------------------------------------------- 00:08:50.265 Suppressions used: 00:08:50.265 count bytes template 00:08:50.265 1 32 /usr/src/fio/parse.c 00:08:50.265 1 8 libtcmalloc_minimal.so 00:08:50.265 ----------------------------------------------------- 00:08:50.265 00:08:50.265 20:19:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:50.265 20:19:34 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:50.265 00:08:50.265 real 0m24.859s 00:08:50.265 user 0m19.644s 00:08:50.265 sys 0m6.575s 00:08:50.265 20:19:34 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.265 20:19:34 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:50.265 ************************************ 00:08:50.265 END TEST nvme_fio 00:08:50.265 ************************************ 00:08:50.265 00:08:50.265 real 1m33.718s 00:08:50.265 user 3m39.300s 00:08:50.265 sys 0m16.864s 00:08:50.265 ************************************ 00:08:50.265 END TEST nvme 00:08:50.265 ************************************ 00:08:50.265 20:19:34 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.265 20:19:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.523 20:19:34 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:50.523 20:19:34 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:50.523 20:19:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.523 20:19:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.523 20:19:34 -- common/autotest_common.sh@10 -- # set +x 00:08:50.523 ************************************ 00:08:50.523 START TEST nvme_scc 00:08:50.524 ************************************ 00:08:50.524 20:19:34 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:50.524 * Looking for test storage... 
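Four controllers, four fio passes; when skimming these reports the two Run status group lines carry the aggregate numbers. A hypothetical extraction one-liner, not part of the SPDK scripts, assuming the fio output above was captured to a file named fio.log:

  awk '/^ *(READ|WRITE):/ { match($0, /bw=[^ ,]+/)
       print $1, substr($0, RSTART + 3, RLENGTH - 3) }' fio.log
  # -> READ: 61.6MiB/s
  #    WRITE: 61.6MiB/s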
00:08:50.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:50.524 20:19:34 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:50.524 20:19:34 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:50.524 20:19:34 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:50.524 20:19:34 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:50.524 20:19:34 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.524 20:19:34 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:50.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.524 --rc genhtml_branch_coverage=1 00:08:50.524 --rc genhtml_function_coverage=1 00:08:50.524 --rc genhtml_legend=1 00:08:50.524 --rc geninfo_all_blocks=1 00:08:50.524 --rc geninfo_unexecuted_blocks=1 00:08:50.524 00:08:50.524 ' 00:08:50.524 20:19:34 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:50.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.524 --rc genhtml_branch_coverage=1 00:08:50.524 --rc genhtml_function_coverage=1 00:08:50.524 --rc genhtml_legend=1 00:08:50.524 --rc geninfo_all_blocks=1 00:08:50.524 --rc geninfo_unexecuted_blocks=1 00:08:50.524 00:08:50.524 ' 00:08:50.524 20:19:34 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:50.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.524 --rc genhtml_branch_coverage=1 00:08:50.524 --rc genhtml_function_coverage=1 00:08:50.524 --rc genhtml_legend=1 00:08:50.524 --rc geninfo_all_blocks=1 00:08:50.524 --rc geninfo_unexecuted_blocks=1 00:08:50.524 00:08:50.524 ' 00:08:50.524 20:19:34 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:50.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.524 --rc genhtml_branch_coverage=1 00:08:50.524 --rc genhtml_function_coverage=1 00:08:50.524 --rc genhtml_legend=1 00:08:50.524 --rc geninfo_all_blocks=1 00:08:50.524 --rc geninfo_unexecuted_blocks=1 00:08:50.524 00:08:50.524 ' 00:08:50.524 20:19:34 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:50.524 20:19:34 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:50.524 20:19:34 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:50.524 20:19:34 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:50.524 20:19:34 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.524 20:19:34 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.524 20:19:34 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.524 20:19:34 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.524 20:19:34 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.524 20:19:34 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:50.524 20:19:34 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
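The PATH echoed above carries the same toolchain directories (golangci, protoc, go) four times over, once per re-sourcing of paths/export.sh; harmless, but it is why the string is so long. An illustrative dedup helper, not something the SPDK scripts actually do:

  dedup_path() {
      local seen=: out='' d
      local IFS=:
      for d in $PATH; do                      # split on ':'
          [[ $seen == *":$d:"* ]] && continue # already kept this entry
          seen+="$d:"
          out+="${out:+:}$d"
      done
      PATH=$out
  }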
00:08:50.524 20:19:34 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:50.524 20:19:34 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:50.524 20:19:34 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:50.524 20:19:34 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:50.524 20:19:34 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:50.524 20:19:34 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:50.524 20:19:34 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:50.524 20:19:34 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:50.524 20:19:34 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:50.524 20:19:34 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.524 20:19:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:50.524 20:19:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:50.524 20:19:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:50.524 20:19:34 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:50.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:50.871 Waiting for block devices as requested 00:08:50.871 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:51.128 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:51.128 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:51.128 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:56.403 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:56.403 20:19:40 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:56.403 20:19:40 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:56.403 20:19:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:56.403 20:19:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:56.403 20:19:40 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
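scan_nvme_ctrls is now walking /sys/class/nvme, and the nvme_get call being traced turns every "register : value" line of nvme id-ctrl output into one entry of a global associative array named after the controller, which is what produces the eval 'nvme0[vid]="0x1b36"' lines. A condensed, illustrative form of that loop:

  nvme_get() {
      local ref=$1 reg val
      local -gA "$ref=()"                       # global assoc array, e.g. nvme0
      while IFS=: read -r reg val; do
          [[ -n $reg && -n $val ]] || continue  # skip headers and blank lines
          reg=${reg// /}                        # 'vid       ' -> 'vid'
          val=${val# }                          # drop the separator space
          eval "$ref[$reg]=\$val"
      done < <(/usr/local/src/nvme-cli/nvme "$2" "$3")
  }
  nvme_get nvme0 id-ctrl /dev/nvme0
  echo "${nvme0[vid]}"                          # -> 0x1b36 for this QEMU controller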
00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:56.403 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
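Fields like oacs=0x12a and lpa=0x7 captured above are bitmasks, so downstream feature checks reduce to plain arithmetic tests. A hedged example (bit positions per the NVMe base specification; 0x12a on this QEMU controller should decode to Format NVM, Namespace Management, Directives, and Doorbell Buffer Config):

    # Illustrative only -- test whether the scanned controller advertises
    # Namespace Management (OACS bit 3) before a test exercises it.
    if (( nvme0[oacs] & (1 << 3) )); then
        echo "nvme0: namespace management supported"
    fi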
00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:56.404 20:19:40 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:56.404 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:56.405 20:19:40 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:56.405 20:19:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:08:56.406 
20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
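The ng0n1 fields collected in this stretch give the namespace geometry: nsze/ncap/nuse are 0x140000 blocks, and flbas=0x4 selects LBA format 4, which the lbaf4 entry further down reports as lbads:12, i.e. 4096-byte blocks. A quick sanity check of the advertised size under those values:

    # Sketch: derive the namespace size from the fields in this dump.
    blocks=$(( 0x140000 ))                 # ng0n1[nsze]
    lbads=12                               # lbaf4 "(in use)": 2^12-byte blocks
    echo "$(( blocks * (1 << lbads) ))"    # 5368709120 bytes = exactly 5 GiB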
00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:08:56.406 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:08:56.407 20:19:40 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:56.407 20:19:40 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.407 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:56.408 20:19:40 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.408 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:56.409 20:19:40 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:56.409 20:19:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:56.409 20:19:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:56.409 20:19:40 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:56.409 20:19:40 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.409 
20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:56.409 
20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.409 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.410 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
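Every reg/val record in this stream comes out of the same small parser, nvme_get, whose frames (functions.sh@16-23) repeat throughout the trace: it runs the bundled nvme-cli binary, splits each line of id-ctrl/id-ns output on the first ':', and evals the pair into a global associative array named after the device node. A sketch reconstructed from the traced line numbers, in which the whitespace trimming is an assumption rather than the verbatim SPDK source:

    nvme_get() {                                       # e.g. nvme_get nvme1 id-ctrl /dev/nvme1
        local ref=$1 reg val                           # @17: name of the array to fill
        shift                                          # @18: the rest is the nvme-cli call
        local -gA "$ref=()"                            # @20: declare the global assoc array
        while IFS=: read -r reg val; do                # @21: split "reg : val" lines
            [[ -n $val ]] || continue                  # @22: skip lines without a value
            reg=${reg//[[:space:]]/}                   # assumed trim: traced keys arrive bare
            val=${val# }                               # assumed trim: values keep trailing pad
            eval "${ref}[${reg}]=\"${val}\""           # @23: e.g. nvme1[nn]="256"
        done < <(/usr/local/src/nvme-cli/nvme "$@")    # @16: the traced nvme-cli invocation
    }

Values that themselves contain colons (the lbafN and psN descriptors) survive intact because read -r leaves everything after the first separator in val, which matches the quoted strings visible in the eval records.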
00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.411 20:19:40 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.411 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:56.412 20:19:40 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
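The ng1n1 parse running here was kicked off by the per-namespace loop traced a little earlier at functions.sh@53-@57: for each controller it globs up both the ngXnY character node and the nvmeXnY block node, runs nvme_get ... id-ns on each, and keys the result into that controller's namespace map. Roughly, as a fragment of the enclosing scan function (extglob enabled is assumed; a sketch, not the verbatim source):

    local -n _ctrl_ns=${ctrl_dev}_ns                             # @53: nameref to e.g. nvme1_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # @54: ng1n* and nvme1n* nodes
        [[ -e $ns ]] || continue                                 # @55: the glob may not match
        ns_dev=${ns##*/}                                         # @56: ng1n1, nvme1n1, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                  # @57: fill ng1n1=( ... ) etc.
        _ctrl_ns[${ns##*n}]=$ns_dev                              # @58: key by namespace number
    done

Because ng1n1 and nvme1n1 share namespace number 1, the later nvme1n1 pass through @58 overwrites the same slot with the block device name, consistent with _ctrl_ns[...]=nvme0n1 seen for the first controller above.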
00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.412 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:08:56.413 20:19:40 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.413 
00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:08:56.413 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:08:56.414 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
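The nvme_get trace above boils down to one pattern: run `nvme id-ns` (or `nvme id-ctrl`), split each "field : value" line on the first colon with IFS=:, and land the pair in a bash associative array. A minimal standalone sketch of that pattern, assuming nvme-cli is installed and /dev/nvme1n1 exists (functions.sh itself routes the assignment through eval so the array name can be a parameter; that indirection is omitted here):

    #!/usr/bin/env bash
    # Minimal sketch of the nvme_get parsing loop traced above.
    declare -A ns
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}          # strip padding around the field name
        [[ -n $reg && -n $val ]] || continue  # skip headers and blank lines
        ns[$reg]=${val# }                 # keep the value, minus one leading space
    done < <(nvme id-ns /dev/nvme1n1)

    echo "nsze=${ns[nsze]} flbas=${ns[flbas]}"

Note that read assigns everything after the first colon to val, so colons inside the value are untouched; that is why composite values such as the lbafN descriptors survive intact.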
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:08:56.415 20:19:40 nvme_scc -- scripts/common.sh@27 -- # return 0
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
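Controller discovery, as traced above, walks /sys/class/nvme, recovers each controller's PCI BDF from sysfs, and filters it through pci_can_use before registering it. A rough sketch of the same walk, assuming sysfs exposes an `address` attribute for PCIe-attached controllers; the PCI_BLOCKED check stands in for the real pci_can_use() in scripts/common.sh, and all names here are illustrative rather than the script's own:

    #!/usr/bin/env bash
    # Illustrative discovery loop mirroring the functions.sh@47-51 trace.
    declare -A bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl/address ]] || continue
        pci=$(<"$ctrl/address")                    # e.g. 0000:00:12.0
        [[ ${PCI_BLOCKED:-} == *"$pci"* ]] && continue  # skip blocked devices
        bdfs[${ctrl##*/}]=$pci
    done
    for name in "${!bdfs[@]}"; do
        printf '%s -> %s\n' "$name" "${bdfs[$name]}"
    done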
'nvme2[fr]="8.0.0 "' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.415 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:56.416 20:19:40 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.416 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:56.416 20:19:40 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:56.417 
20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.417 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.418 
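A few of the nvme2 id-ctrl values above decode as follows per the NVMe base specification: ver packs major/minor/tertiary version numbers into one word, mdts is a power-of-two multiple of the controller's minimum page size, and sqes/cqes carry min/max entry sizes as exponents in the low and high nibbles. A quick check in bash arithmetic, with the 4 KiB page size taken as an assumption (CAP.MPSMIN is not in this trace):

    #!/usr/bin/env bash
    # Decode a few fields captured from /dev/nvme2 above.
    ver=0x10400 mdts=7 sqes=0x66 cqes=0x44

    printf 'NVMe version : %d.%d.%d\n' \
        $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    printf 'max transfer : %d KiB (2^mdts pages of 4 KiB, assumed)\n' \
        $(( (1 << mdts) * 4 ))
    printf 'SQ entry     : %d..%d bytes\n' \
        $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))
    printf 'CQ entry     : %d..%d bytes\n' \
        $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))

So this QEMU controller reports NVMe 1.4.0, 512 KiB maximum transfers (under the assumed page size), and the standard 64-byte submission / 16-byte completion queue entries.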
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:08:56.418 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:08:56.419 20:19:40 nvme_scc -- 
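The block above is one complete pass of the nvme_get helper: it runs nvme-cli's id-ns against the namespace node, splits each "field : value" output line on the colon into a reg/val pair, and evals the pair into a globally scoped associative array named after the namespace (here ng2n1, then ng2n2). A minimal sketch of that loop, reconstructed from the functions.sh@16-23 markers visible in the trace -- the real SPDK script may differ in detail:

    # Sketch reconstructed from the trace, not the verbatim SPDK source.
    # Requires bash >= 4.2 for 'local -gA'.
    nvme_get() {
        local ref=$1 reg val                 # @17: ref = array name, e.g. ng2n1
        shift                                # @18: rest is the nvme-cli sub-command
        local -gA "$ref=()"                  # @20: globally visible assoc. array
        while IFS=: read -r reg val; do      # @21: split "reg : val" on the colon
            [[ -n $val ]] || continue        # @22: skip header/blank lines
            eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""   # @23: e.g. ng2n1[dpc]="0x1f"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: e.g. id-ns /dev/ng2n2
    }

The eval is what produces the paired eval/assignment lines in the trace; the empty [[ -n '' ]] check at the start of each pass is the id-ns banner line, which carries no value and is skipped.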
00:08:56.419 20:19:40 nvme_scc -- nvme/functions.sh@21-23 -- # ng2n2 (cont.): mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:08:56.420 20:19:40 nvme_scc -- nvme/functions.sh@21-23 -- # ng2n2 (cont.): mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:08:56.420 20:19:40 nvme_scc -- nvme/functions.sh@21-23 -- # ng2n2 LBA formats lbaf0-lbaf7: identical to ng2n1, lbaf4 ('ms:0 lbads:12 rp:0') in use
00:08:56.421 20:19:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=ng2n2
00:08:56.421 20:19:40 nvme_scc -- nvme/functions.sh@54-57 -- # next node /sys/class/nvme/nvme2/ng2n3 exists; ns_dev=ng2n3; nvme_get ng2n3 id-ns /dev/ng2n3
00:08:56.421 20:19:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:08:56.421 20:19:40 nvme_scc -- nvme/functions.sh@21-23 -- # ng2n3 id-ns fields, condensed: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14
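Each namespace node is discovered by the extglob pattern at functions.sh@54, which matches both the ngXnY character-device nodes and the nvmeXnY block-device nodes under the controller's sysfs directory; @58 then registers the parsed array name in _ctrl_ns keyed by namespace id. A sketch of that walk under the same assumptions (extglob enabled in the real script; $ctrl is e.g. /sys/class/nvme/nvme2):

    # Sketch of the loop at functions.sh@54-58, as visible in the trace.
    shopt -s extglob nullglob
    declare -A _ctrl_ns=()
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54: ng2n* and nvme2n*
        [[ -e $ns ]] || continue                  # @55: sysfs node really exists
        ns_dev=${ns##*/}                          # @56: basename, e.g. ng2n3
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: fill the assoc. array
        _ctrl_ns[${ns##*n}]=$ns_dev               # @58: key = digits after last 'n' (NSID)
    done

Because ng2n1 and nvme2n1 share NSID 1, the trace suggests the block-device entries parsed later overwrite the ng entries at the same _ctrl_ns index, while the per-namespace arrays (ng2n1, nvme2n1, ...) all remain available.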
00:08:56.421 20:19:40 nvme_scc -- nvme/functions.sh@21-23 -- # ng2n3 (cont.): nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:08:56.422 20:19:40 nvme_scc -- nvme/functions.sh@21-23 -- # ng2n3 (cont.): mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:08:56.422 20:19:40 nvme_scc -- nvme/functions.sh@21-23 -- # ng2n3 LBA formats lbaf0-lbaf7: identical to ng2n1, lbaf4 ('ms:0 lbads:12 rp:0') in use
00:08:56.422 20:19:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[3]=ng2n3
00:08:56.422 20:19:40 nvme_scc -- nvme/functions.sh@54-57 -- # next node /sys/class/nvme/nvme2/nvme2n1 exists; ns_dev=nvme2n1; nvme_get nvme2n1 id-ns /dev/nvme2n1
00:08:56.422 20:19:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:08:56.422 20:19:40 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2n1 id-ns fields, condensed: nsze=0x100000 ncap=0x100000 nuse=0x100000
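Every namespace on this controller reports flbas=0x4, whose low nibble selects lbaf4 ('ms:0 lbads:12 rp:0'), i.e. 2^12 = 4096-byte blocks with no metadata; with nsze=0x100000 blocks that makes each namespace 4 GiB. A hypothetical helper (for illustration only, not part of functions.sh) that decodes the in-use block size from an array populated by nvme_get:

    # Hypothetical helper, not in the SPDK scripts.
    # Requires bash >= 4.3 for namerefs.
    ns_block_size() {
        local -n ns=$1                        # e.g. ns_block_size nvme2n1
        local idx=$((ns[flbas] & 0xf))        # FLBAS bits 3:0 = current LBA format
        local lbads=${ns[lbaf$idx]#*lbads:}   # pull the lbads field out of that lbaf
        lbads=${lbads%% *}
        echo $(( 1 << lbads ))                # 2^lbads bytes, here 1<<12 = 4096
    }
    # Usage: ns_block_size ng2n1   ->  4096
    #        total bytes: $(( $(ns_block_size ng2n1) * ng2n1[nsze] ))  ->  4 GiB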
00:08:56.422 20:19:40 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2n1 (cont.): nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:08:56.423 20:19:40 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2n1 (cont.): npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:08:56.423 20:19:40 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0'
00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 
lbads:9 rp:0 ' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.424 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.687 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.687 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:56.687 20:19:40 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:56.687 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.687 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.687 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.687 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:56.687 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:56.687 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.687 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.687 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.687 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:56.688 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:56.689 
20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:56.689 20:19:40 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.689 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.690 20:19:40 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:56.690 20:19:40 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:56.690 20:19:40 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:56.690 20:19:40 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:56.690 20:19:40 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:56.690 20:19:40 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:56.690 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:56.691 20:19:40 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:56.691 20:19:40 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 
20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.691 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:56.692 20:19:40 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 
20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:56.692 
20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.692 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:56.693 20:19:40 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:56.693 20:19:40 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:56.693 20:19:40 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
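The xtrace above is the scan_nvme_ctrls/nvme_get path in test/common/nvme/functions.sh: for each controller it runs the nvme-cli id-ctrl command, splits every output line on `:` into a register name and value, and evals the pair into a per-controller bash associative array (hence the long runs of `IFS=:` / `read -r reg val` / `nvme3[oacs]=0x12a` records), then registers the controller in the ctrls, nvmes, bdfs, and ordered_ctrls maps. A minimal sketch of that parsing pattern, assuming an `nvme` CLI in $PATH; the helper name parse_id_ctrl and the nameref used in place of the script's eval are illustrative simplifications, not the exact functions.sh code:

#!/usr/bin/env bash
# Sketch of the id-ctrl parse loop the trace is exercising (assumes nvme-cli).
declare -A ctrls

parse_id_ctrl() {
    local ctrl_dev=$1 reg val
    declare -gA "$ctrl_dev"          # global array named after the device, e.g. nvme0
    local -n _ctrl=$ctrl_dev         # nameref instead of functions.sh's eval
    # `nvme id-ctrl /dev/nvme0` prints one register per line, e.g. `oncs : 0x15d`
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                # drop the padding around the name
        val=${val#"${val%%[![:space:]]*}"}      # trim leading blanks, keep trailing ones
        [[ -n $reg && -n $val ]] && _ctrl[$reg]=$val
    done < <(nvme id-ctrl "/dev/$ctrl_dev")
    ctrls[$ctrl_dev]=$ctrl_dev
}

parse_id_ctrl nvme0
echo "nvme0 oncs=${nvme0[oncs]} mdts=${nvme0[mdts]}"

The `ordered_ctrls[${ctrl_dev/nvme/}]=nvme3` record above is the same idea one level up: the numeric suffix indexes a plain array, so controllers enumerate in stable numeric order even though iterating `${!ctrls[@]}` over the associative map is unordered, which is why the feature loop below visits nvme1, nvme0, nvme3, nvme2.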
00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:08:56.694 20:19:40 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:08:56.694 20:19:40 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:08:56.694 20:19:40 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:08:56.694 20:19:40 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:56.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:57.521 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:57.521 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:57.521 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:57.521 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:57.521 20:19:41 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:08:57.521 20:19:41 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:57.521 20:19:41 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.521 20:19:41 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:57.521 ************************************ 00:08:57.521 START TEST nvme_simple_copy 00:08:57.521 ************************************ 00:08:57.521 20:19:41 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:08:57.779 Initializing NVMe Controllers 00:08:57.779 Attaching to 0000:00:10.0 00:08:57.779 Controller supports SCC. Attached to 0000:00:10.0 00:08:57.779 Namespace ID: 1 size: 6GB 00:08:57.779 Initialization complete. 
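The `ctrl=nvme1` / `bdf=0000:00:10.0` selection above comes out of get_ctrls_with_feature: for every scanned controller, ctrl_has_scc reads the ONCS register captured during the scan and tests bit 8, which in the NVMe base specification advertises Copy command support. All four QEMU controllers report oncs=0x15d, which has that bit set, so all qualify and the first ordered controller wins; run_test then points the SPDK simple_copy app at its PCI address. A compact sketch of the bit test, with a hypothetical oncs_by_ctrl map standing in for the scanned register arrays:

#!/usr/bin/env bash
# Sketch of the ONCS bit-8 (Copy supported) check behind ctrl_has_scc.
declare -A oncs_by_ctrl=([nvme0]=0x15d [nvme1]=0x15d [nvme2]=0x15d [nvme3]=0x15d)

ctrl_has_scc() {
    local oncs=${oncs_by_ctrl[$1]}
    # 0x15d = 0b1_0101_1101; `1 << 8` masks the Copy-supported bit.
    (( oncs & 1 << 8 ))
}

for ctrl in nvme0 nvme1 nvme2 nvme3; do
    ctrl_has_scc "$ctrl" && echo "$ctrl supports SCC"
done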
00:08:57.779 00:08:57.779 Controller QEMU NVMe Ctrl (12340 ) 00:08:57.779 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:08:57.779 Namespace Block Size:4096 00:08:57.779 Writing LBAs 0 to 63 with Random Data 00:08:57.779 Copied LBAs from 0 - 63 to the Destination LBA 256 00:08:57.779 LBAs matching Written Data: 64 00:08:57.779 00:08:57.779 real 0m0.260s 00:08:57.779 user 0m0.090s 00:08:57.779 sys 0m0.069s 00:08:57.779 20:19:41 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.779 ************************************ 00:08:57.779 END TEST nvme_simple_copy 00:08:57.779 ************************************ 00:08:57.779 20:19:41 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:08:58.038 00:08:58.038 real 0m7.514s 00:08:58.038 user 0m1.068s 00:08:58.038 sys 0m1.321s 00:08:58.038 20:19:42 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.038 20:19:42 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:58.038 ************************************ 00:08:58.038 END TEST nvme_scc 00:08:58.038 ************************************ 00:08:58.038 20:19:42 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:08:58.038 20:19:42 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:08:58.038 20:19:42 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:08:58.038 20:19:42 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:08:58.038 20:19:42 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:08:58.038 20:19:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.038 20:19:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.038 20:19:42 -- common/autotest_common.sh@10 -- # set +x 00:08:58.038 ************************************ 00:08:58.038 START TEST nvme_fdp 00:08:58.038 ************************************ 00:08:58.038 20:19:42 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:08:58.038 * Looking for test storage... 00:08:58.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:58.038 20:19:42 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:58.038 20:19:42 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:58.038 20:19:42 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:58.038 20:19:42 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:08:58.038 20:19:42 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.038 20:19:42 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:58.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.038 --rc genhtml_branch_coverage=1 00:08:58.038 --rc genhtml_function_coverage=1 00:08:58.038 --rc genhtml_legend=1 00:08:58.038 --rc geninfo_all_blocks=1 00:08:58.038 --rc geninfo_unexecuted_blocks=1 00:08:58.038 00:08:58.038 ' 00:08:58.038 20:19:42 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:58.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.038 --rc genhtml_branch_coverage=1 00:08:58.038 --rc genhtml_function_coverage=1 00:08:58.038 --rc genhtml_legend=1 00:08:58.038 --rc geninfo_all_blocks=1 00:08:58.038 --rc geninfo_unexecuted_blocks=1 00:08:58.038 00:08:58.038 ' 00:08:58.038 20:19:42 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:58.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.038 --rc genhtml_branch_coverage=1 00:08:58.038 --rc genhtml_function_coverage=1 00:08:58.038 --rc genhtml_legend=1 00:08:58.038 --rc geninfo_all_blocks=1 00:08:58.038 --rc geninfo_unexecuted_blocks=1 00:08:58.038 00:08:58.038 ' 00:08:58.038 20:19:42 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:58.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.038 --rc genhtml_branch_coverage=1 00:08:58.038 --rc genhtml_function_coverage=1 00:08:58.038 --rc genhtml_legend=1 00:08:58.038 --rc geninfo_all_blocks=1 00:08:58.038 --rc geninfo_unexecuted_blocks=1 00:08:58.038 00:08:58.038 ' 00:08:58.038 20:19:42 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:58.038 20:19:42 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:58.038 20:19:42 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:58.038 20:19:42 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:58.038 20:19:42 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.038 20:19:42 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.038 20:19:42 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.038 20:19:42 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.038 20:19:42 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.038 20:19:42 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:08:58.038 20:19:42 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.038 20:19:42 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:08:58.038 20:19:42 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:58.038 20:19:42 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:08:58.038 20:19:42 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:58.038 20:19:42 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:08:58.038 20:19:42 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:58.038 20:19:42 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:58.038 20:19:42 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:58.038 20:19:42 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:08:58.038 20:19:42 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.038 20:19:42 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:58.297 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:58.555 Waiting for block devices as requested 00:08:58.555 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.555 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.813 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.813 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.092 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:04.092 20:19:47 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:04.092 20:19:47 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:04.092 20:19:47 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:04.092 20:19:47 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:04.092 20:19:47 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:04.092 20:19:47 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.092 20:19:47 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.092 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:04.093 20:19:47 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:04.093 20:19:47 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.093 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:04.094 20:19:47 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 
20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:04.094 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:04.095 20:19:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:04.095 20:19:48 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:04.095 20:19:48 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:04.095 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
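The functions.sh@17-23 lines traced above all come from one helper: nvme_get runs an nvme-cli subcommand, splits each output line on the first ':' with IFS, and evals the key/value pair into a global associative array (ng0n1 here). A minimal self-contained sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source:

    #!/usr/bin/env bash
    # Parse "field : value" lines from an nvme-cli subcommand into a
    # global associative array whose name is given by $1.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"              # e.g. declares global ng0n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue    # skip output lines with no value
            reg=${reg//[[:space:]]/}     # "lbaf  4 " -> "lbaf4"
            val=${val# }                 # drop the single leading space
            eval "${ref}[\$reg]=\$val"   # e.g. ng0n1[nsze]=0x140000
        done < <("$@")
    }

    # Usage (assumes nvme-cli is installed and the device node exists):
    #   nvme_get ng0n1 nvme id-ns /dev/ng0n1
    #   echo "${ng0n1[nsze]}"            # -> 0x140000

Note that read splits only on the first colon, so compound values such as 'ms:0 lbads:9 rp:0' survive intact in $val, exactly as stored in the lbafN entries below.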
00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:04.096 20:19:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
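Together with flbas, those lbaf0-lbaf7 entries pin down the namespace geometry: the low nibble of flbas selects the active LBA format (lbaf4 here, the one marked "(in use)"), lbads is a power-of-two exponent, and nsze counts logical blocks. Checking the numbers from this trace with plain shell arithmetic:

    flbas=0x4       # low nibble selects the LBA format
    lbads=12        # from "lbaf4: ms:0 lbads:12 rp:0 (in use)"
    nsze=0x140000   # namespace size, in blocks

    echo $(( flbas & 0xf ))                  # 4: the in-use format index
    echo $(( 1 << lbads ))                   # 4096-byte logical blocks
    echo $(( (nsze * (1 << lbads)) >> 30 ))  # 5: namespace capacity in GiB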
00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:04.096 20:19:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:04.097 20:19:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:04.097 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:04.098 20:19:48 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:04.098 20:19:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:04.098 20:19:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:04.098 20:19:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:04.098 20:19:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:04.098 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:04.099 20:19:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
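The functions.sh@47-63 lines earlier in this pass show the outer discovery loop that just finished nvme0 and is now filling nvme1: walk /sys/class/nvme/nvme*, skip controllers the PCI allow-list rejects, id-ctrl the controller, id-ns every ng*/nvme*n* namespace, and record the results in a set of registries. A compressed sketch of that control flow, reconstructed from the trace (pci_can_use is treated as an opaque predicate, and nvme_get is the parser sketched above):

    shopt -s extglob                # needed for the @(...) namespace glob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(< "$ctrl/address")                   # e.g. 0000:00:10.0
        pci_can_use "$pci" || continue             # allow-list (scripts/common.sh)
        ctrl_dev=${ctrl##*/}                       # e.g. nvme1
        nvme_get "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"
        declare -gA "${ctrl_dev}_ns=()"            # per-controller namespace map
        declare -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}                       # ng1n1, nvme1n1, ...
            nvme_get "$ns_dev" nvme id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns_dev##*n}]=$ns_dev        # keyed by namespace number
        done
        unset -n _ctrl_ns
        ctrls[$ctrl_dev]=$ctrl_dev                 # registry of controllers
        nvmes[$ctrl_dev]=${ctrl_dev}_ns            # name of its namespace map
        bdfs[$ctrl_dev]=$pci                       # its PCI address
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev # index-ordered list
    done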
00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.099 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.100 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:04.101 20:19:48 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.101 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
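
Note on the loop entered at nvme/functions.sh@54 above: the extglob pattern "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* visits both kinds of namespace nodes the kernel exposes under the controller's sysfs directory — the character node (ng1n1, being decoded here) and the block node (nvme1n1, decoded further down) — and each one gets its own id-ns pass. A small sketch of just the glob, assuming extglob is enabled as it is in these test scripts:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    # ${ctrl##*nvme} -> "1" and ${ctrl##*/} -> "nvme1", so the pattern
    # expands to /sys/class/nvme/nvme1/@(ng1|nvme1n)* and matches both
    # ng1n1 and nvme1n1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        printf '%s\n' "${ns##*/}"
    done
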
00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:04.102 20:19:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
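
Note on the format fields: ng1n1[flbas]=0x7 together with the lbaf descriptors printed just below pins down the namespace geometry — the low nibble of flbas indexes the LBA format list, and lbads in the selected descriptor is the log2 of the data block size. For the values in this trace that selects lbaf7, "ms:64 lbads:12 rp:0 (in use)", i.e. 4096-byte blocks with 64 bytes of metadata per block. A worked one-liner, with the two inputs copied from the log:

    flbas=0x7; lbads=12     # from the id-ns decode above and the lbaf7 line below
    echo "format $((flbas & 0xf)): $((1 << lbads))-byte blocks"
    # -> format 7: 4096-byte blocks
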
00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.102 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:04.103 20:19:48 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:04.103 20:19:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:04.103 20:19:48 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.103 20:19:48 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:04.103 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:04.104 20:19:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:04.104 20:19:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:04.104 20:19:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:04.104 20:19:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:04.104 20:19:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.104 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
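The repetitive read/eval records above all come from one small helper in nvme/functions.sh: it walks the "field : value" lines that nvme-cli's id-ctrl (and id-ns) commands print and caches each register in a global associative array named after the device. A minimal sketch of that pattern, assuming nvme-cli's default output format; the body here is illustrative, not the exact functions.sh implementation (the real script pins /usr/local/src/nvme-cli/nvme and preserves the values' original padding):

  # Sketch: cache `nvme id-ctrl` / `nvme id-ns` fields in a global assoc array.
  nvme_get() {
    local ref=$1 reg val          # $ref names the array, e.g. "nvme2"
    shift
    local -gA "$ref=()"           # one global associative array per device
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue   # keep only "field : value" lines
      reg=${reg//[[:space:]]/}    # strip the column padding from the field name
      val=${val# }                # drop the single space after the colon
      eval "${ref}[\$reg]=\$val"  # e.g. nvme2[vid]=0x1b36
    done < <(nvme "$@")           # e.g. nvme id-ctrl /dev/nvme2
  }

After `nvme_get nvme2 id-ctrl /dev/nvme2`, callers can read "${nvme2[sn]}" or "${nvme2[oncs]}" directly instead of re-invoking nvme-cli for every field, which is exactly what the rest of this test relies on.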
00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:04.105 20:19:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:04.105 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
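Two of the fields just captured, wctemp=343 and cctemp=373, are composite-temperature thresholds that the NVMe spec reports in Kelvin, so this QEMU controller is advertising a 70 °C warning and a 100 °C critical threshold. A worked one-liner, with an assumed helper name:

  # Kelvin -> Celsius for the Identify Controller temperature thresholds.
  k2c() { echo $(( $1 - 273 )); }
  k2c 343   # -> 70   (wctemp: warning composite temperature, °C)
  k2c 373   # -> 100  (cctemp: critical composite temperature, °C)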
00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:04.106 20:19:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:04.106 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
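Most of the single-register values cached in this pass are consumed later as bitfields; for instance, oncs=0x15d and oacs=0x12a, captured above, advertise the optional NVM and admin command sets. An illustrative decode under the NVMe base-spec bit assignments (the helper name is an assumption, not functions.sh API):

  # Test one bit of a cached capability register.
  has_bit() { (( $1 & (1 << $2) )); }
  has_bit 0x15d 2 && echo "ONCS bit 2: Dataset Management supported"
  has_bit 0x15d 3 && echo "ONCS bit 3: Write Zeroes supported"
  has_bit 0x12a 3 && echo "OACS bit 3: Namespace Management supported"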
00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:04.107 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 
20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.108 20:19:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:04.108 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:04.109 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000
[xtrace 00:09:04.109-00:09:04.110 condensed] nvme_get parsed id-ns /dev/ng2n2 into ng2n2[]: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'; _ctrl_ns[2]=ng2n2
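The parse pattern repeated throughout this trace is a single bash idiom: run the nvme-cli binary, read its "field : value" output with IFS=':', and eval each pair into a global associative array named by the caller (the trace shows the eval, IFS and 'local -gA' steps at functions.sh@20-23). A minimal sketch reconstructed from the xtrace; the real helper in nvme/functions.sh may differ in details:

    nvme_get() {
        local ref=$1 reg val; shift
        local -gA "$ref=()"                       # caller-named global assoc array (functions.sh@20)
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}              # "lbaf  4 " -> "lbaf4", "nsze    " -> "nsze"
            val="${val#"${val%%[![:space:]]*}"}"  # strip leading blanks from " 0x100000"
            [[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\$val"
        done < <("$@")
    }
    nvme_get ng2n2 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
    declare -p ng2n2                              # inspect the parsed fields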
[xtrace 00:09:04.110-00:09:04.112 condensed] functions.sh@54-57: matched /sys/class/nvme/nvme2/ng2n3 -> ns_dev=ng2n3; nvme_get parsed id-ns /dev/ng2n3 into ng2n3[]: all field values identical to ng2n2 above; _ctrl_ns[3]=ng2n3
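The loop driving these blocks (functions.sh@54 in the trace) walks the controller's sysfs directory with an extglob pattern that matches both the generic char-device nodes (ng2n*) and the block-device nodes (nvme2n*), which is why each namespace is parsed twice under two names. A standalone illustration of that pattern, assuming the /sys/class/nvme/nvme2 controller present on this VM:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # expands to ng2* or nvme2n*
        [[ -e $ns ]] || continue                                 # functions.sh@55 guard
        echo "namespace node: ${ns##*/}"                         # ng2n1..ng2n3, nvme2n1..nvme2n3
    done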
[xtrace 00:09:04.112-00:09:04.114 condensed] functions.sh@54-57: matched /sys/class/nvme/nvme2/nvme2n1 -> ns_dev=nvme2n1; nvme_get parsed id-ns /dev/nvme2n1 into nvme2n1[]: all field values identical to ng2n2 above; _ctrl_ns[1]=nvme2n1
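The repeated field values are enough to size these namespaces. flbas=0x4 selects LBA format 4 (lbaf4: ms:0 lbads:12, marked "in use"); per the NVMe spec (the nibble decoding is not shown in this log), the low nibble of FLBAS is the format index and LBADS is log2 of the block size, so blocks are 2^12 = 4096 bytes and nsze=0x100000 blocks gives 4 GiB per namespace. A quick check in the same shell, using the field names as parsed above:

    flbas=0x4 lbads=12 nsze=0x100000
    fmt=$((flbas & 0xf))                 # low nibble of FLBAS = in-use LBA format index
    bs=$((1 << lbads))                   # LBADS is log2(block size): 2^12 = 4096
    echo "lbaf$fmt: block=${bs}B, namespace=$(( (nsze * bs) >> 30 )) GiB"
    # -> lbaf4: block=4096B, namespace=4 GiB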
[xtrace 00:09:04.114-00:09:04.115 condensed] functions.sh@54-57: matched /sys/class/nvme/nvme2/nvme2n2 -> ns_dev=nvme2n2; nvme_get parsed id-ns /dev/nvme2n2 into nvme2n2[]: all field values identical to ng2n2 above; _ctrl_ns[2]=nvme2n2
[xtrace 00:09:04.115-00:09:04.116 condensed] functions.sh@54-57: matched /sys/class/nvme/nvme2/nvme2n3 -> ns_dev=nvme2n3; nvme_get parsing id-ns /dev/nvme2n3 into nvme2n3[]: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 00:09:04.116 20:19:48
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:04.116 20:19:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.116 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:04.117 20:19:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:04.117 20:19:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:04.117 20:19:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:04.117 20:19:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:04.117 20:19:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
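The xtrace above is nvme/functions.sh's nvme_get helper walking the output of nvme id-ctrl /dev/nvme3: each "name : value" line is split on the colon (the repeated @21 IFS=: / read -r reg val steps), the value is tested for non-emptiness (@22), and the pair is eval'ed into a per-controller associative array (@23: nvme3[vid], nvme3[sn], ...). A minimal standalone sketch of that pattern, assuming nvme-cli's "key : value" output layout and hardcoding the array name that the real helper builds dynamically via eval:

    #!/usr/bin/env bash
    # Minimal sketch of the nvme_get loop traced above (not the verbatim
    # helper): fold nvme-cli "name : value" lines into an associative array.
    # The real script evals a dynamic array name (nvme3, nvme2n2, ...);
    # this hardcodes one for clarity.
    declare -A nvme3=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # strip padding: "lbaf  4 " -> "lbaf4", like the id-ns keys above
        val=${val# }                 # drop the space after the colon
        [[ -n $reg && -n $val ]] && nvme3[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme3)
    echo "sn=${nvme3[sn]} ctratt=${nvme3[ctratt]}"

Later helpers then read these arrays by name through a nameref (the local -n _ctrl=nvme3 steps further down), which is how the FDP probe at the end of this trace gets at ${nvme3[ctratt]}.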
00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.117 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 
20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:04.118 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
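The sqes=0x66 and cqes=0x44 values captured just above are packed bytes: per the NVMe Identify Controller layout, bits 3:0 give the required (minimum) queue entry size and bits 7:4 the maximum, each as a power of two. A hypothetical one-off helper (not part of nvme/functions.sh) to unpack them:

    # Hypothetical helper: unpack the SQES/CQES bytes recorded above.
    # Bits 3:0 = required entry size, bits 7:4 = maximum, both log2(bytes).
    decode_qes() {
        local qes=$(( $1 ))          # bash arithmetic accepts 0x-prefixed hex
        printf 'min=%dB max=%dB\n' \
            $(( 1 << (qes & 0xf) )) $(( 1 << ((qes >> 4) & 0xf) ))
    }
    decode_qes 0x66                  # SQES -> min=64B max=64B
    decode_qes 0x44                  # CQES -> min=16B max=16B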
00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.119 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.120 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:04.378 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:04.378 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
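After the power-state fields below, the trace reaches the point of the whole dump: get_ctrl_with_feature fdp iterates every parsed controller, fetches its stored ctratt, and tests bit 19 (0x80000, the Flexible Data Placement attribute), so only nvme3 with ctratt=0x88010 gets echoed. A condensed sketch of that walk, using the ctratt values this run reported (nvme0/1/2: 0x8000, nvme3: 0x88010):

    # Condensed sketch of the ctrl_has_fdp walk traced below; the real
    # script resolves ctratt per controller via namerefs into the arrays
    # built above rather than taking it as an argument.
    ctrl_has_fdp() {
        (( $1 & 1 << 19 ))           # CTRATT bit 19: FDP supported
    }
    for pair in nvme0:0x8000 nvme1:0x8000 nvme2:0x8000 nvme3:0x88010; do
        ctrl_has_fdp "$(( ${pair#*:} ))" && echo "${pair%%:*}"
    done                             # prints: nvme3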
00:09:04.378 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.378 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.378 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:04.379 20:19:48 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:04.379 20:19:48 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:04.379 20:19:48 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:04.379 20:19:48 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:04.379 20:19:48 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:04.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:05.202 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:05.202 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:05.202 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:05.202 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:05.202 20:19:49 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:05.202 20:19:49 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:05.202 20:19:49 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.202 20:19:49 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:05.202 ************************************ 00:09:05.203 START TEST nvme_flexible_data_placement 00:09:05.203 ************************************ 00:09:05.203 20:19:49 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:05.460 Initializing NVMe Controllers 00:09:05.460 Attaching to 0000:00:13.0 00:09:05.460 Controller supports FDP Attached to 0000:00:13.0 00:09:05.460 Namespace ID: 1 Endurance Group ID: 1 00:09:05.460 Initialization complete. 
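The controller selection traced above comes down to a single bitmask test: bit 19 of the Identify Controller CTRATT field advertises Flexible Data Placement, so nvme3 (ctratt=0x88010, where 0x88010 & 0x80000 != 0) is picked while the plain 0x8000 controllers are skipped. A minimal standalone sketch of the same check, assuming nvme-cli is installed and /dev/nvme3 exists:

    # Pull CTRATT out of Identify Controller and test the FDP bit (bit 19).
    ctratt=$(nvme id-ctrl /dev/nvme3 | awk -F: '/^ctratt/ {gsub(/[[:space:]]/, "", $2); print $2}')
    if (( ctratt & 1 << 19 )); then
        echo "FDP supported (ctratt=$ctratt)"
    fi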
00:09:05.460 00:09:05.460 ================================== 00:09:05.460 == FDP tests for Namespace: #01 == 00:09:05.460 ================================== 00:09:05.460 00:09:05.460 Get Feature: FDP: 00:09:05.460 ================= 00:09:05.460 Enabled: Yes 00:09:05.460 FDP configuration Index: 0 00:09:05.460 00:09:05.460 FDP configurations log page 00:09:05.460 =========================== 00:09:05.460 Number of FDP configurations: 1 00:09:05.460 Version: 0 00:09:05.460 Size: 112 00:09:05.460 FDP Configuration Descriptor: 0 00:09:05.460 Descriptor Size: 96 00:09:05.460 Reclaim Group Identifier format: 2 00:09:05.460 FDP Volatile Write Cache: Not Present 00:09:05.460 FDP Configuration: Valid 00:09:05.460 Vendor Specific Size: 0 00:09:05.460 Number of Reclaim Groups: 2 00:09:05.460 Number of Reclaim Unit Handles: 8 00:09:05.460 Max Placement Identifiers: 128 00:09:05.460 Number of Namespaces Supported: 256 00:09:05.460 Reclaim Unit Nominal Size: 6000000 bytes 00:09:05.460 Estimated Reclaim Unit Time Limit: Not Reported 00:09:05.460 RUH Desc #000: RUH Type: Initially Isolated 00:09:05.460 RUH Desc #001: RUH Type: Initially Isolated 00:09:05.460 RUH Desc #002: RUH Type: Initially Isolated 00:09:05.460 RUH Desc #003: RUH Type: Initially Isolated 00:09:05.460 RUH Desc #004: RUH Type: Initially Isolated 00:09:05.460 RUH Desc #005: RUH Type: Initially Isolated 00:09:05.460 RUH Desc #006: RUH Type: Initially Isolated 00:09:05.460 RUH Desc #007: RUH Type: Initially Isolated 00:09:05.461 00:09:05.461 FDP reclaim unit handle usage log page 00:09:05.461 ====================================== 00:09:05.461 Number of Reclaim Unit Handles: 8 00:09:05.461 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:05.461 RUH Usage Desc #001: RUH Attributes: Unused 00:09:05.461 RUH Usage Desc #002: RUH Attributes: Unused 00:09:05.461 RUH Usage Desc #003: RUH Attributes: Unused 00:09:05.461 RUH Usage Desc #004: RUH Attributes: Unused 00:09:05.461 RUH Usage Desc #005: RUH Attributes: Unused 00:09:05.461 RUH Usage Desc #006: RUH Attributes: Unused 00:09:05.461 RUH Usage Desc #007: RUH Attributes: Unused 00:09:05.461 00:09:05.461 FDP statistics log page 00:09:05.461 ======================= 00:09:05.461 Host bytes with metadata written: 1039007744 00:09:05.461 Media bytes with metadata written: 1039265792 00:09:05.461 Media bytes erased: 0 00:09:05.461 00:09:05.461 FDP Reclaim unit handle status 00:09:05.461 ============================== 00:09:05.461 Number of RUHS descriptors: 2 00:09:05.461 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004120 00:09:05.461 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:05.461 00:09:05.461 FDP write on placement id: 0 success 00:09:05.461 00:09:05.461 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:05.461 00:09:05.461 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:05.461 00:09:05.461 Get Feature: FDP Events for Placement handle: #0 00:09:05.461 ======================== 00:09:05.461 Number of FDP Events: 6 00:09:05.461 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:05.461 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:05.461 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:05.461 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:05.461 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:05.461 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:05.461 00:09:05.461 FDP events log
page 00:09:05.461 =================== 00:09:05.461 Number of FDP events: 1 00:09:05.461 FDP Event #0: 00:09:05.461 Event Type: RU Not Written to Capacity 00:09:05.461 Placement Identifier: Valid 00:09:05.461 NSID: Valid 00:09:05.461 Location: Valid 00:09:05.461 Placement Identifier: 0 00:09:05.461 Event Timestamp: 5 00:09:05.461 Namespace Identifier: 1 00:09:05.461 Reclaim Group Identifier: 0 00:09:05.461 Reclaim Unit Handle Identifier: 0 00:09:05.461 00:09:05.461 FDP test passed 00:09:05.461 00:09:05.461 real 0m0.232s 00:09:05.461 user 0m0.073s 00:09:05.461 sys 0m0.058s 00:09:05.461 20:19:49 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.461 20:19:49 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:05.461 ************************************ 00:09:05.461 END TEST nvme_flexible_data_placement 00:09:05.461 ************************************ 00:09:05.461 00:09:05.461 real 0m7.591s 00:09:05.461 user 0m1.076s 00:09:05.461 sys 0m1.338s 00:09:05.461 20:19:49 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.461 20:19:49 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:05.461 ************************************ 00:09:05.461 END TEST nvme_fdp 00:09:05.461 ************************************ 00:09:05.461 20:19:49 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:05.461 20:19:49 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:05.461 20:19:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.461 20:19:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.461 20:19:49 -- common/autotest_common.sh@10 -- # set +x 00:09:05.461 ************************************ 00:09:05.461 START TEST nvme_rpc 00:09:05.461 ************************************ 00:09:05.461 20:19:49 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:05.719 * Looking for test storage... 
00:09:05.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:05.719 20:19:49 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:05.719 20:19:49 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:05.719 20:19:49 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:05.719 20:19:49 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.719 20:19:49 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.720 20:19:49 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.720 20:19:49 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:05.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.720 --rc genhtml_branch_coverage=1 00:09:05.720 --rc genhtml_function_coverage=1 00:09:05.720 --rc genhtml_legend=1 00:09:05.720 --rc geninfo_all_blocks=1 00:09:05.720 --rc geninfo_unexecuted_blocks=1 00:09:05.720 00:09:05.720 ' 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:05.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.720 --rc genhtml_branch_coverage=1 00:09:05.720 --rc genhtml_function_coverage=1 00:09:05.720 --rc genhtml_legend=1 00:09:05.720 --rc geninfo_all_blocks=1 00:09:05.720 --rc geninfo_unexecuted_blocks=1 00:09:05.720 00:09:05.720 ' 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:05.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.720 --rc genhtml_branch_coverage=1 00:09:05.720 --rc genhtml_function_coverage=1 00:09:05.720 --rc genhtml_legend=1 00:09:05.720 --rc geninfo_all_blocks=1 00:09:05.720 --rc geninfo_unexecuted_blocks=1 00:09:05.720 00:09:05.720 ' 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:05.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.720 --rc genhtml_branch_coverage=1 00:09:05.720 --rc genhtml_function_coverage=1 00:09:05.720 --rc genhtml_legend=1 00:09:05.720 --rc geninfo_all_blocks=1 00:09:05.720 --rc geninfo_unexecuted_blocks=1 00:09:05.720 00:09:05.720 ' 00:09:05.720 20:19:49 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.720 20:19:49 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:05.720 20:19:49 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:05.720 20:19:49 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67582 00:09:05.720 20:19:49 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:05.720 20:19:49 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:05.720 20:19:49 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67582 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67582 ']' 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.720 20:19:49 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.720 [2024-12-12 20:19:49.941210] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
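The negative test that follows drives the target purely over JSON-RPC: attach the first PCIe controller as bdev Nvme0, ask it to apply a firmware image from a path that does not exist, expect the -32603 "open file failed." error, then detach. The same sequence can be replayed by hand against a running spdk_tgt (a sketch; the method names, bdev name, and address are the ones used in this run):

    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    ./scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 \
        || echo "expected failure: open file failed (-32603)"
    ./scripts/rpc.py bdev_nvme_detach_controller Nvme0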
00:09:05.720 [2024-12-12 20:19:49.941322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67582 ] 00:09:05.978 [2024-12-12 20:19:50.098361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:05.978 [2024-12-12 20:19:50.193321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.978 [2024-12-12 20:19:50.193397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.912 20:19:50 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.912 20:19:50 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:06.912 20:19:50 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:06.912 Nvme0n1 00:09:06.912 20:19:51 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:06.912 20:19:51 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:07.170 request: 00:09:07.170 { 00:09:07.170 "bdev_name": "Nvme0n1", 00:09:07.170 "filename": "non_existing_file", 00:09:07.170 "method": "bdev_nvme_apply_firmware", 00:09:07.170 "req_id": 1 00:09:07.170 } 00:09:07.170 Got JSON-RPC error response 00:09:07.170 response: 00:09:07.170 { 00:09:07.170 "code": -32603, 00:09:07.170 "message": "open file failed." 00:09:07.170 } 00:09:07.170 20:19:51 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:07.170 20:19:51 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:07.170 20:19:51 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:07.428 20:19:51 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:07.428 20:19:51 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67582 00:09:07.428 20:19:51 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67582 ']' 00:09:07.428 20:19:51 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67582 00:09:07.428 20:19:51 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:07.428 20:19:51 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.428 20:19:51 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67582 00:09:07.428 20:19:51 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.428 20:19:51 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.428 killing process with pid 67582 00:09:07.428 20:19:51 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67582' 00:09:07.428 20:19:51 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67582 00:09:07.428 20:19:51 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67582 00:09:08.801 00:09:08.801 real 0m3.292s 00:09:08.801 user 0m6.306s 00:09:08.801 sys 0m0.470s 00:09:08.801 20:19:52 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.801 20:19:52 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.801 ************************************ 00:09:08.801 END TEST nvme_rpc 00:09:08.801 ************************************ 00:09:08.801 20:19:53 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:08.801 20:19:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:08.801 20:19:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.801 20:19:53 -- common/autotest_common.sh@10 -- # set +x 00:09:09.059 ************************************ 00:09:09.059 START TEST nvme_rpc_timeouts 00:09:09.059 ************************************ 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:09.059 * Looking for test storage... 00:09:09.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.059 20:19:53 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:09.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.059 --rc genhtml_branch_coverage=1 00:09:09.059 --rc genhtml_function_coverage=1 00:09:09.059 --rc genhtml_legend=1 00:09:09.059 --rc geninfo_all_blocks=1 00:09:09.059 --rc geninfo_unexecuted_blocks=1 00:09:09.059 00:09:09.059 ' 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:09.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.059 --rc genhtml_branch_coverage=1 00:09:09.059 --rc genhtml_function_coverage=1 00:09:09.059 --rc genhtml_legend=1 00:09:09.059 --rc geninfo_all_blocks=1 00:09:09.059 --rc geninfo_unexecuted_blocks=1 00:09:09.059 00:09:09.059 ' 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:09.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.059 --rc genhtml_branch_coverage=1 00:09:09.059 --rc genhtml_function_coverage=1 00:09:09.059 --rc genhtml_legend=1 00:09:09.059 --rc geninfo_all_blocks=1 00:09:09.059 --rc geninfo_unexecuted_blocks=1 00:09:09.059 00:09:09.059 ' 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:09.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.059 --rc genhtml_branch_coverage=1 00:09:09.059 --rc genhtml_function_coverage=1 00:09:09.059 --rc genhtml_legend=1 00:09:09.059 --rc geninfo_all_blocks=1 00:09:09.059 --rc geninfo_unexecuted_blocks=1 00:09:09.059 00:09:09.059 ' 00:09:09.059 20:19:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.059 20:19:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67647 00:09:09.059 20:19:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67647 00:09:09.059 20:19:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67679 00:09:09.059 20:19:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
00:09:09.059 20:19:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67679 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67679 ']' 00:09:09.059 20:19:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.059 20:19:53 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:09.317 [2024-12-12 20:19:53.287706] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:09:09.317 [2024-12-12 20:19:53.287870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67679 ] 00:09:09.317 [2024-12-12 20:19:53.465059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:09.575 [2024-12-12 20:19:53.560103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.575 [2024-12-12 20:19:53.560185] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.140 20:19:54 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.140 20:19:54 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:10.140 Checking default timeout settings: 00:09:10.140 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:10.140 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:10.398 Making settings changes with rpc: 00:09:10.398 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:10.398 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:10.656 Check default vs. modified settings: 00:09:10.656 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:10.656 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:10.914 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:10.914 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:10.914 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:10.914 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67647 00:09:10.914 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:10.914 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:10.914 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67647 00:09:10.914 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:10.914 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:10.914 20:19:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:10.914 Setting action_on_timeout is changed as expected. 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67647 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67647 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:10.914 Setting timeout_us is changed as expected. 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67647 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67647 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:10.914 Setting timeout_admin_us is changed as expected. 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67647 /tmp/settings_modified_67647 00:09:10.914 20:19:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67679 00:09:10.914 20:19:55 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67679 ']' 00:09:10.914 20:19:55 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67679 00:09:10.914 20:19:55 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:10.914 20:19:55 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.914 20:19:55 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67679 00:09:10.914 20:19:55 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.914 20:19:55 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.914 killing process with pid 67679 00:09:10.914 20:19:55 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67679' 00:09:10.914 20:19:55 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67679 00:09:10.914 20:19:55 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67679 00:09:12.286 RPC TIMEOUT SETTING TEST PASSED. 00:09:12.287 20:19:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
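Each "changed as expected" line above comes from the same three-step idiom: snapshot the live configuration with save_config before and after bdev_nvme_set_options, then normalize one field at a time and compare. A condensed sketch of that comparison, assuming both snapshots were written with rpc.py save_config (file names as in this run):

    # timeout_us should move from 0 to 12000000 after
    # bdev_nvme_set_options --timeout-us=12000000
    before=$(grep timeout_us /tmp/settings_default_67647 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep timeout_us /tmp/settings_modified_67647 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ "$before" != "$after" ]] && echo "Setting timeout_us is changed as expected."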
00:09:12.287 00:09:12.287 real 0m3.309s 00:09:12.287 user 0m6.436s 00:09:12.287 sys 0m0.511s 00:09:12.287 20:19:56 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.287 ************************************ 00:09:12.287 END TEST nvme_rpc_timeouts 00:09:12.287 ************************************ 00:09:12.287 20:19:56 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:12.287 20:19:56 -- spdk/autotest.sh@239 -- # uname -s 00:09:12.287 20:19:56 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:12.287 20:19:56 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:12.287 20:19:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:12.287 20:19:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.287 20:19:56 -- common/autotest_common.sh@10 -- # set +x 00:09:12.287 ************************************ 00:09:12.287 START TEST sw_hotplug 00:09:12.287 ************************************ 00:09:12.287 20:19:56 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:12.287 * Looking for test storage... 00:09:12.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:12.287 20:19:56 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:12.287 20:19:56 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:12.287 20:19:56 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:12.545 20:19:56 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.545 20:19:56 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:12.545 20:19:56 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.545 20:19:56 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:12.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.545 --rc genhtml_branch_coverage=1 00:09:12.545 --rc genhtml_function_coverage=1 00:09:12.545 --rc genhtml_legend=1 00:09:12.545 --rc geninfo_all_blocks=1 00:09:12.545 --rc geninfo_unexecuted_blocks=1 00:09:12.545 00:09:12.545 ' 00:09:12.545 20:19:56 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:12.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.545 --rc genhtml_branch_coverage=1 00:09:12.545 --rc genhtml_function_coverage=1 00:09:12.545 --rc genhtml_legend=1 00:09:12.545 --rc geninfo_all_blocks=1 00:09:12.545 --rc geninfo_unexecuted_blocks=1 00:09:12.545 00:09:12.545 ' 00:09:12.545 20:19:56 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:12.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.545 --rc genhtml_branch_coverage=1 00:09:12.545 --rc genhtml_function_coverage=1 00:09:12.545 --rc genhtml_legend=1 00:09:12.545 --rc geninfo_all_blocks=1 00:09:12.545 --rc geninfo_unexecuted_blocks=1 00:09:12.545 00:09:12.545 ' 00:09:12.545 20:19:56 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:12.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.545 --rc genhtml_branch_coverage=1 00:09:12.545 --rc genhtml_function_coverage=1 00:09:12.545 --rc genhtml_legend=1 00:09:12.545 --rc geninfo_all_blocks=1 00:09:12.545 --rc geninfo_unexecuted_blocks=1 00:09:12.545 00:09:12.545 ' 00:09:12.545 20:19:56 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:12.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:12.804 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:12.804 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:12.804 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:12.804 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:12.804 20:19:56 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:12.804 20:19:56 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:12.804 20:19:56 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:12.804 20:19:56 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:12.804 20:19:56 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:12.804 20:19:56 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:12.804 20:19:56 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:12.804 20:19:56 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:12.804 20:19:56 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:13.062 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:13.320 Waiting for block devices as requested 00:09:13.320 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.320 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.320 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.578 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.853 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:18.854 20:20:02 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:18.854 20:20:02 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:18.854 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:18.854 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:18.854 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:19.114 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:19.374 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.374 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.374 20:20:03 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:19.375 20:20:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:19.635 20:20:03 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:19.635 20:20:03 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:19.635 20:20:03 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68530 00:09:19.635 20:20:03 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:19.635 20:20:03 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:19.635 20:20:03 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:19.635 20:20:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:19.635 20:20:03 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:19.635 20:20:03 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:19.635 20:20:03 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:19.635 20:20:03 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:19.635 20:20:03 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:19.635 20:20:03 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:19.635 20:20:03 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:19.635 20:20:03 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:19.635 20:20:03 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:19.635 20:20:03 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:19.635 Initializing NVMe Controllers 00:09:19.635 Attaching to 0000:00:10.0 00:09:19.635 Attaching to 0000:00:11.0 00:09:19.635 Attached to 0000:00:10.0 00:09:19.635 Attached to 0000:00:11.0 00:09:19.635 Initialization complete. Starting I/O... 
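The run that follows performs hotplug_events=3 surprise-removal cycles with hotplug_wait=6 seconds between steps (the remove_attach_helper 3 6 false call above), while the hotplug example app keeps I/O running and logs nvme_ctrlr_fail / "Controller removed" for each yanked device. A minimal sketch of one such cycle via the kernel's standard PCI sysfs interface (generic Linux paths; whether the helper uses exactly these files is an assumption, bdf as in this run):

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise-remove: the app sees "in failed state"
    sleep 6                                       # hotplug_wait
    echo 1 > /sys/bus/pci/rescan                  # re-enumerate so the device can be re-attached

In the samples below, "N I/Os completed (+D)" is a cumulative total plus the delta since the previous sample, e.g. 2645 = 1 + 2644 on the first two readings for controller 12340.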
00:09:19.635 QEMU NVMe Ctrl (12340 ): 1 I/Os completed (+1) 00:09:19.635 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:19.635 00:09:20.643 QEMU NVMe Ctrl (12340 ): 2645 I/Os completed (+2644) 00:09:20.643 QEMU NVMe Ctrl (12341 ): 2581 I/Os completed (+2581) 00:09:20.643 00:09:22.028 QEMU NVMe Ctrl (12340 ): 5998 I/Os completed (+3353) 00:09:22.028 QEMU NVMe Ctrl (12341 ): 5801 I/Os completed (+3220) 00:09:22.028 00:09:22.967 QEMU NVMe Ctrl (12340 ): 9595 I/Os completed (+3597) 00:09:22.967 QEMU NVMe Ctrl (12341 ): 9358 I/Os completed (+3557) 00:09:22.967 00:09:23.900 QEMU NVMe Ctrl (12340 ): 12799 I/Os completed (+3204) 00:09:23.900 QEMU NVMe Ctrl (12341 ): 12538 I/Os completed (+3180) 00:09:23.900 00:09:24.834 QEMU NVMe Ctrl (12340 ): 16049 I/Os completed (+3250) 00:09:24.834 QEMU NVMe Ctrl (12341 ): 15635 I/Os completed (+3097) 00:09:24.834 00:09:25.412 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:25.412 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:25.412 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:25.412 [2024-12-12 20:20:09.633035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:25.412 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:25.412 [2024-12-12 20:20:09.633998] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.412 [2024-12-12 20:20:09.634037] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.412 [2024-12-12 20:20:09.634052] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.412 [2024-12-12 20:20:09.634070] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.412 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:25.412 [2024-12-12 20:20:09.635614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.412 [2024-12-12 20:20:09.635654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.412 [2024-12-12 20:20:09.635665] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.412 [2024-12-12 20:20:09.635677] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.669 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:25.669 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:25.669 [2024-12-12 20:20:09.654460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:25.669 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:25.669 [2024-12-12 20:20:09.655326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.669 [2024-12-12 20:20:09.655361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.669 [2024-12-12 20:20:09.655378] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.669 [2024-12-12 20:20:09.655393] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.669 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:25.669 [2024-12-12 20:20:09.656756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.669 [2024-12-12 20:20:09.656785] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.669 [2024-12-12 20:20:09.656798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.669 [2024-12-12 20:20:09.656808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.669 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:25.669 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:25.669 EAL: Scan for (pci) bus failed. 00:09:25.669 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:25.669 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:25.669 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:25.669 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:25.669 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:25.669 00:09:25.669 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:25.669 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:25.669 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:25.669 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:25.669 Attaching to 0000:00:10.0 00:09:25.669 Attached to 0000:00:10.0 00:09:25.926 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:25.926 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:25.926 20:20:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:25.926 Attaching to 0000:00:11.0 00:09:25.926 Attached to 0000:00:11.0 00:09:26.857 QEMU NVMe Ctrl (12340 ): 3283 I/Os completed (+3283) 00:09:26.857 QEMU NVMe Ctrl (12341 ): 3013 I/Os completed (+3013) 00:09:26.857 00:09:27.789 QEMU NVMe Ctrl (12340 ): 6913 I/Os completed (+3630) 00:09:27.789 QEMU NVMe Ctrl (12341 ): 6640 I/Os completed (+3627) 00:09:27.789 00:09:28.725 QEMU NVMe Ctrl (12340 ): 10638 I/Os completed (+3725) 00:09:28.725 QEMU NVMe Ctrl (12341 ): 10354 I/Os completed (+3714) 00:09:28.725 00:09:29.654 QEMU NVMe Ctrl (12340 ): 14376 I/Os completed (+3738) 00:09:29.654 QEMU NVMe Ctrl (12341 ): 14049 I/Os completed (+3695) 00:09:29.654 00:09:31.026 QEMU NVMe Ctrl (12340 ): 17624 I/Os completed (+3248) 00:09:31.027 QEMU NVMe Ctrl (12341 ): 17293 I/Os completed (+3244) 00:09:31.027 00:09:31.957 QEMU NVMe Ctrl (12340 ): 21289 I/Os completed (+3665) 00:09:31.957 QEMU NVMe Ctrl (12341 ): 20966 I/Os completed (+3673) 00:09:31.957 00:09:32.892 QEMU NVMe Ctrl (12340 ): 24672 I/Os completed (+3383) 00:09:32.892 QEMU NVMe Ctrl (12341 ): 24337 I/Os completed (+3371) 
00:09:32.892 00:09:33.827 QEMU NVMe Ctrl (12340 ): 28015 I/Os completed (+3343) 00:09:33.827 QEMU NVMe Ctrl (12341 ): 27690 I/Os completed (+3353) 00:09:33.827 00:09:34.769 QEMU NVMe Ctrl (12340 ): 31635 I/Os completed (+3620) 00:09:34.769 QEMU NVMe Ctrl (12341 ): 31313 I/Os completed (+3623) 00:09:34.769 00:09:35.704 QEMU NVMe Ctrl (12340 ): 35114 I/Os completed (+3479) 00:09:35.704 QEMU NVMe Ctrl (12341 ): 34832 I/Os completed (+3519) 00:09:35.704 00:09:36.639 QEMU NVMe Ctrl (12340 ): 38513 I/Os completed (+3399) 00:09:36.639 QEMU NVMe Ctrl (12341 ): 38233 I/Os completed (+3401) 00:09:36.639 00:09:38.014 QEMU NVMe Ctrl (12340 ): 41797 I/Os completed (+3284) 00:09:38.014 QEMU NVMe Ctrl (12341 ): 41566 I/Os completed (+3333) 00:09:38.014 00:09:38.014 20:20:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:38.014 20:20:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:38.014 20:20:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:38.014 20:20:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:38.014 [2024-12-12 20:20:21.927759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:38.014 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:38.014 [2024-12-12 20:20:21.928934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 [2024-12-12 20:20:21.928985] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 [2024-12-12 20:20:21.929003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 [2024-12-12 20:20:21.929019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:38.014 [2024-12-12 20:20:21.931160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 [2024-12-12 20:20:21.931220] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 [2024-12-12 20:20:21.931236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 [2024-12-12 20:20:21.931250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 20:20:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:38.014 20:20:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:38.014 [2024-12-12 20:20:21.953064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:38.014 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:38.014 [2024-12-12 20:20:21.954128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 [2024-12-12 20:20:21.954170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 [2024-12-12 20:20:21.954191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 [2024-12-12 20:20:21.954206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:38.014 [2024-12-12 20:20:21.955862] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 [2024-12-12 20:20:21.955898] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 [2024-12-12 20:20:21.955913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 [2024-12-12 20:20:21.955927] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:38.014 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:38.014 EAL: Scan for (pci) bus failed. 00:09:38.014 20:20:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:38.014 20:20:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:38.014 20:20:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:38.014 20:20:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:38.014 20:20:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:38.014 20:20:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:38.014 20:20:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:38.014 20:20:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:38.014 20:20:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:38.014 20:20:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:38.014 Attaching to 0000:00:10.0 00:09:38.014 Attached to 0000:00:10.0 00:09:38.272 20:20:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:38.272 20:20:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:38.272 20:20:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:38.272 Attaching to 0000:00:11.0 00:09:38.272 Attached to 0000:00:11.0 00:09:38.838 QEMU NVMe Ctrl (12340 ): 2012 I/Os completed (+2012) 00:09:38.838 QEMU NVMe Ctrl (12341 ): 1712 I/Os completed (+1712) 00:09:38.838 00:09:39.773 QEMU NVMe Ctrl (12340 ): 5237 I/Os completed (+3225) 00:09:39.773 QEMU NVMe Ctrl (12341 ): 4944 I/Os completed (+3232) 00:09:39.773 00:09:40.707 QEMU NVMe Ctrl (12340 ): 8509 I/Os completed (+3272) 00:09:40.707 QEMU NVMe Ctrl (12341 ): 8208 I/Os completed (+3264) 00:09:40.707 00:09:41.639 QEMU NVMe Ctrl (12340 ): 12186 I/Os completed (+3677) 00:09:41.639 QEMU NVMe Ctrl (12341 ): 11865 I/Os completed (+3657) 00:09:41.639 00:09:43.013 QEMU NVMe Ctrl (12340 ): 15896 I/Os completed (+3710) 00:09:43.013 QEMU NVMe Ctrl (12341 ): 15558 I/Os completed (+3693) 00:09:43.013 00:09:43.635 QEMU NVMe Ctrl (12340 ): 19564 I/Os completed (+3668) 00:09:43.635 QEMU NVMe Ctrl (12341 ): 19250 I/Os completed (+3692) 00:09:43.635 00:09:45.015 QEMU NVMe Ctrl (12340 ): 23233 I/Os completed (+3669) 00:09:45.015 QEMU NVMe Ctrl (12341 ): 22933 I/Os completed (+3683) 00:09:45.015 
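
The re-attach half of the cycle runs at sw_hotplug.sh lines 56-62: an 'echo 1' (plausibly a PCI bus rescan, target again hidden by xtrace), then per device an 'echo uio_pci_generic', two echoes of the BDF, and an empty echo. A sketch of the driver_override rebind this pattern suggests; every redirect target here is an assumption, and the duplicate BDF write at lines 60-61 is collapsed into a single probe:

    # Rescan the bus, then steer each device to uio_pci_generic (paths assumed).
    echo 1 > /sys/bus/pci/rescan
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe     # ask the kernel to probe it now
        echo ''     > "/sys/bus/pci/devices/$bdf/driver_override"  # clear the override
    done

The 'Attaching to' / 'Attached to' lines that follow are SPDK picking the devices back up.
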
00:09:45.957 QEMU NVMe Ctrl (12340 ): 27112 I/Os completed (+3879) 00:09:45.957 QEMU NVMe Ctrl (12341 ): 26799 I/Os completed (+3866) 00:09:45.957 00:09:46.891 QEMU NVMe Ctrl (12340 ): 30756 I/Os completed (+3644) 00:09:46.891 QEMU NVMe Ctrl (12341 ): 30457 I/Os completed (+3658) 00:09:46.891 00:09:47.825 QEMU NVMe Ctrl (12340 ): 34225 I/Os completed (+3469) 00:09:47.825 QEMU NVMe Ctrl (12341 ): 33925 I/Os completed (+3468) 00:09:47.825 00:09:48.763 QEMU NVMe Ctrl (12340 ): 37689 I/Os completed (+3464) 00:09:48.763 QEMU NVMe Ctrl (12341 ): 37395 I/Os completed (+3470) 00:09:48.763 00:09:49.698 QEMU NVMe Ctrl (12340 ): 41226 I/Os completed (+3537) 00:09:49.698 QEMU NVMe Ctrl (12341 ): 41000 I/Os completed (+3605) 00:09:49.698 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:50.266 [2024-12-12 20:20:34.293300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:50.266 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:50.266 [2024-12-12 20:20:34.294547] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 [2024-12-12 20:20:34.294598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 [2024-12-12 20:20:34.294617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 [2024-12-12 20:20:34.294633] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:50.266 [2024-12-12 20:20:34.297107] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 [2024-12-12 20:20:34.297163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 [2024-12-12 20:20:34.297179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 [2024-12-12 20:20:34.297193] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:50.266 [2024-12-12 20:20:34.316136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:50.266 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:50.266 [2024-12-12 20:20:34.317191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 [2024-12-12 20:20:34.317236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 [2024-12-12 20:20:34.317255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 [2024-12-12 20:20:34.317270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:50.266 [2024-12-12 20:20:34.318944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 [2024-12-12 20:20:34.318983] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 [2024-12-12 20:20:34.319000] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 [2024-12-12 20:20:34.319012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:50.266 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:50.525 Attaching to 0000:00:10.0 00:09:50.525 Attached to 0000:00:10.0 00:09:50.525 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:50.525 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:50.525 20:20:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:50.525 Attaching to 0000:00:11.0 00:09:50.525 Attached to 0000:00:11.0 00:09:50.525 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:50.525 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:50.525 [2024-12-12 20:20:34.580311] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:02.793 20:20:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:02.793 20:20:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:02.793 20:20:46 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.94 00:10:02.793 20:20:46 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.94 00:10:02.793 20:20:46 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:02.793 20:20:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.94 00:10:02.793 20:20:46 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.94 2 00:10:02.793 remove_attach_helper took 42.94s to complete (handling 2 nvme drive(s)) 20:20:46 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:09.359 20:20:52 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68530 00:10:09.359 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68530) - No such process 00:10:09.359 20:20:52 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68530 00:10:09.359 20:20:52 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:09.359 20:20:52 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:09.359 20:20:52 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:09.359 20:20:52 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69079 00:10:09.359 20:20:52 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:09.359 20:20:52 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69079 00:10:09.359 20:20:52 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69079 ']' 00:10:09.359 20:20:52 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.359 20:20:52 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.359 20:20:52 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.359 20:20:52 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.359 20:20:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:09.359 20:20:52 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:09.359 [2024-12-12 20:20:52.661753] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:10:09.359 [2024-12-12 20:20:52.661872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69079 ] 00:10:09.359 [2024-12-12 20:20:52.821792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.359 [2024-12-12 20:20:52.917771] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.359 20:20:53 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.359 20:20:53 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:09.359 20:20:53 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:09.359 20:20:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.359 20:20:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:09.359 20:20:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.359 20:20:53 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:09.359 20:20:53 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:09.359 20:20:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:09.359 20:20:53 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:09.359 20:20:53 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:09.359 20:20:53 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:09.359 20:20:53 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:09.359 20:20:53 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:09.359 20:20:53 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:09.359 20:20:53 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:09.359 20:20:53 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:09.359 20:20:53 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:09.359 20:20:53 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:15.936 20:20:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:15.936 20:20:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:15.936 20:20:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:15.936 20:20:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:15.936 20:20:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:15.936 20:20:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:15.936 20:20:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:15.936 20:20:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:15.936 20:20:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:15.936 20:20:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:15.936 20:20:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:15.936 20:20:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.936 20:20:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:15.936 20:20:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.936 20:20:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:15.936 20:20:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:15.936 [2024-12-12 20:20:59.600105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:15.936 [2024-12-12 20:20:59.601285] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.936 [2024-12-12 20:20:59.601322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.936 [2024-12-12 20:20:59.601335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.936 [2024-12-12 20:20:59.601352] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.936 [2024-12-12 20:20:59.601360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.936 [2024-12-12 20:20:59.601368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.936 [2024-12-12 20:20:59.601375] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.936 [2024-12-12 20:20:59.601383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.936 [2024-12-12 20:20:59.601390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.936 [2024-12-12 20:20:59.601400] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.936 [2024-12-12 20:20:59.601407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.936 [2024-12-12 20:20:59.601425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.936 [2024-12-12 20:21:00.000096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:15.936 [2024-12-12 20:21:00.001290] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.936 [2024-12-12 20:21:00.001321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.936 [2024-12-12 20:21:00.001332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.936 [2024-12-12 20:21:00.001345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.936 [2024-12-12 20:21:00.001354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.936 [2024-12-12 20:21:00.001361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.936 [2024-12-12 20:21:00.001369] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.936 [2024-12-12 20:21:00.001376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.936 [2024-12-12 20:21:00.001383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.936 [2024-12-12 20:21:00.001391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.936 [2024-12-12 20:21:00.001398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.936 [2024-12-12 20:21:00.001404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.936 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:15.936 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:15.936 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:15.936 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:15.936 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:15.936 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:15.936 20:21:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.936 20:21:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:15.936 20:21:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.936 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:15.936 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:16.194 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:16.194 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:16.194 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:16.194 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:16.194 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 
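
The bdev_bdfs helper driving this phase is fully recoverable from the xtrace at sw_hotplug.sh lines 12-13: query all bdevs over RPC, pull out the backing PCI address of each NVMe namespace, and de-duplicate. A standalone equivalent, assuming SPDK's scripts/rpc.py client in place of the test framework's rpc_cmd wrapper (which is what feeds jq via /dev/fd/63 in the trace):

    # List the PCI addresses (BDFs) backing the target's current NVMe bdevs.
    bdev_bdfs() {
        ./scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

The '(( 2 > 0 ))' at line 50 is simply "did the helper return any BDFs".
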
00:10:16.194 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:16.194 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:16.194 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:16.194 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:16.194 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:16.194 20:21:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:28.431 20:21:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.431 20:21:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:28.431 20:21:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:28.431 [2024-12-12 20:21:12.400301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:28.431 [2024-12-12 20:21:12.401723] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.431 [2024-12-12 20:21:12.401756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.431 [2024-12-12 20:21:12.401767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.431 [2024-12-12 20:21:12.401784] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.431 [2024-12-12 20:21:12.401791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.431 [2024-12-12 20:21:12.401799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.431 [2024-12-12 20:21:12.401806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.431 [2024-12-12 20:21:12.401814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.431 [2024-12-12 20:21:12.401820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.431 [2024-12-12 20:21:12.401828] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.431 [2024-12-12 20:21:12.401834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.431 [2024-12-12 20:21:12.401841] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:28.431 20:21:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.431 20:21:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:28.431 20:21:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:28.431 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:28.692 [2024-12-12 20:21:12.900304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:28.692 [2024-12-12 20:21:12.901456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.692 [2024-12-12 20:21:12.901483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.692 [2024-12-12 20:21:12.901495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.692 [2024-12-12 20:21:12.901507] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.692 [2024-12-12 20:21:12.901521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.692 [2024-12-12 20:21:12.901528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.692 [2024-12-12 20:21:12.901536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.692 [2024-12-12 20:21:12.901542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.692 [2024-12-12 20:21:12.901550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.692 [2024-12-12 20:21:12.901557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.692 [2024-12-12 20:21:12.901565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.692 [2024-12-12 20:21:12.901571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.954 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:28.954 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:28.954 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:28.954 20:21:12 
sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:28.954 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:28.954 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:28.954 20:21:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.954 20:21:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:28.954 20:21:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.954 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:28.954 20:21:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:28.954 20:21:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:28.954 20:21:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:28.954 20:21:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:28.954 20:21:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:28.954 20:21:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:28.954 20:21:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:28.954 20:21:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:28.954 20:21:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:29.216 20:21:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:29.216 20:21:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:29.216 20:21:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:41.455 20:21:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:41.455 20:21:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.455 20:21:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:41.455 [2024-12-12 20:21:25.300518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
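
Between removal and the '(( 0 > 0 ))' that ends the wait, the script polls that helper until the removed controllers drop out of the bdev list, which is where the 'Still waiting for %s to be gone' lines come from. A condensed equivalent of the loop visible at sw_hotplug.sh lines 50-51:

    # Poll until every hot-removed BDF is gone from bdev_get_bdevs output.
    bdfs=($(bdev_bdfs))                  # helper sketched above
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
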
00:10:41.455 [2024-12-12 20:21:25.301771] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.455 [2024-12-12 20:21:25.301802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.455 [2024-12-12 20:21:25.301813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.455 [2024-12-12 20:21:25.301829] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.455 [2024-12-12 20:21:25.301837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.455 [2024-12-12 20:21:25.301847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.455 [2024-12-12 20:21:25.301854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.455 [2024-12-12 20:21:25.301862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.455 [2024-12-12 20:21:25.301868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.455 [2024-12-12 20:21:25.301876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.455 [2024-12-12 20:21:25.301882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.455 [2024-12-12 20:21:25.301890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:41.455 20:21:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.455 20:21:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.455 20:21:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:41.455 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:41.716 [2024-12-12 20:21:25.700514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
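
The completion dumps around each removal, including the one that follows, decode as expected hot-remove noise rather than a failure: the driver is aborting its own outstanding ASYNC EVENT REQUEST commands as the controller goes away, and the trailing status tuple is the standard NVMe status pair:

    (00/07)  ->  SCT 0x0 = Generic Command Status, SC 0x07 = Command Abort Requested
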
00:10:41.717 [2024-12-12 20:21:25.701662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.717 [2024-12-12 20:21:25.701692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.717 [2024-12-12 20:21:25.701703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.717 [2024-12-12 20:21:25.701714] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.717 [2024-12-12 20:21:25.701723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.717 [2024-12-12 20:21:25.701730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.717 [2024-12-12 20:21:25.701739] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.717 [2024-12-12 20:21:25.701746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.717 [2024-12-12 20:21:25.701755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.717 [2024-12-12 20:21:25.701762] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.717 [2024-12-12 20:21:25.701769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.717 [2024-12-12 20:21:25.701776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.717 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:41.717 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:41.717 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:41.717 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:41.717 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:41.717 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:41.717 20:21:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.717 20:21:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.717 20:21:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.717 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:41.717 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:41.978 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:41.978 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:41.978 20:21:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:41.978 20:21:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:41.978 20:21:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:41.978 20:21:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:41.978 20:21:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:41.978 20:21:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:41.978 20:21:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:41.978 20:21:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:41.978 20:21:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.63 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.63 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.63 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.63 2 00:10:54.196 remove_attach_helper took 44.63s to complete (handling 2 nvme drive(s)) 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:54.196 20:21:38 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:54.196 20:21:38 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:54.196 20:21:38 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:00.773 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:00.773 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.773 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.773 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.773 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.773 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:00.773 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.773 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.773 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.773 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.773 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.773 20:21:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.773 20:21:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.773 20:21:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.773 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:00.773 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:00.773 [2024-12-12 20:21:44.262512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:00.773 [2024-12-12 20:21:44.263553] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.773 [2024-12-12 20:21:44.263592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.773 [2024-12-12 20:21:44.263604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.773 [2024-12-12 20:21:44.263622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.773 [2024-12-12 20:21:44.263629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.773 [2024-12-12 20:21:44.263638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.773 [2024-12-12 20:21:44.263645] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.773 [2024-12-12 20:21:44.263653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.773 [2024-12-12 20:21:44.263659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.773 [2024-12-12 20:21:44.263668] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.773 [2024-12-12 20:21:44.263674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.773 [2024-12-12 20:21:44.263686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.773 [2024-12-12 20:21:44.662506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
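
The '44.63' arithmetic reported a bit earlier in this run is bash's own timing: timing_cmd sets TIMEFORMAT=%2R so the time keyword emits only wall-clock seconds with two decimals, and that single number becomes helper_time. The trace shows this done with exec and fd redirection; a minimal idiom with the same effect, where remove_attach_helper is the test's own function (not defined here):

    # Capture just the wall-clock runtime of the helper, e.g. "44.63".
    TIMEFORMAT=%2R
    helper_time=$( { time remove_attach_helper 3 6 true > /dev/null 2>&1; } 2>&1 )
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2
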
00:11:00.773 [2024-12-12 20:21:44.663552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.773 [2024-12-12 20:21:44.663586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.773 [2024-12-12 20:21:44.663598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.773 [2024-12-12 20:21:44.663615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.773 [2024-12-12 20:21:44.663624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.773 [2024-12-12 20:21:44.663631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.773 [2024-12-12 20:21:44.663640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.773 [2024-12-12 20:21:44.663646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.773 [2024-12-12 20:21:44.663656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.773 [2024-12-12 20:21:44.663663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.774 [2024-12-12 20:21:44.663670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.774 [2024-12-12 20:21:44.663677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.774 20:21:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.774 20:21:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.774 20:21:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:00.774 20:21:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:01.037 20:21:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:01.037 20:21:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.037 20:21:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.245 20:21:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.245 20:21:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.245 20:21:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.245 20:21:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.245 20:21:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.245 20:21:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:13.245 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:13.245 [2024-12-12 20:21:57.162783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
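
After a successful re-attach, the test asserts the exact BDF set at sw_hotplug.sh lines 70-71. The heavily backslashed right-hand side in the trace ('\0\0\0\0\:\0\0\:\1\0\.\0 ...') is only xtrace's rendering of a quoted [[ ]] operand, escaped to show it matches literally rather than as a glob; the characters are not in the script. The check amounts to:

    # Both controllers must be back, and nothing else.
    bdfs=($(bdev_bdfs))
    [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]] || exit 1
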
00:11:13.245 [2024-12-12 20:21:57.163831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.245 [2024-12-12 20:21:57.163870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.245 [2024-12-12 20:21:57.163881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.245 [2024-12-12 20:21:57.163899] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.245 [2024-12-12 20:21:57.163907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.245 [2024-12-12 20:21:57.163915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.245 [2024-12-12 20:21:57.163923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.245 [2024-12-12 20:21:57.163931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.245 [2024-12-12 20:21:57.163938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.245 [2024-12-12 20:21:57.163947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.245 [2024-12-12 20:21:57.163953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.245 [2024-12-12 20:21:57.163962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.505 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:13.505 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:13.505 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:13.505 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.505 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.505 20:21:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.505 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.505 20:21:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.505 20:21:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.505 [2024-12-12 20:21:57.662785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
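
This target-side phase also brackets the helper with the hotplug-monitor RPCs seen at sw_hotplug.sh lines 119-120: bdev_nvme_set_hotplug -d to stop monitoring, then -e to re-arm it before the timed cycles. With the standalone client the same toggle would be (flags taken from the trace; the rpc.py path is an assumption):

    ./scripts/rpc.py bdev_nvme_set_hotplug -d   # disable the NVMe hotplug monitor
    ./scripts/rpc.py bdev_nvme_set_hotplug -e   # re-enable it for the next event
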
00:11:13.505 [2024-12-12 20:21:57.663803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.505 [2024-12-12 20:21:57.663833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.505 [2024-12-12 20:21:57.663845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.505 [2024-12-12 20:21:57.663861] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.505 [2024-12-12 20:21:57.663872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.505 [2024-12-12 20:21:57.663879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.505 [2024-12-12 20:21:57.663888] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.505 [2024-12-12 20:21:57.663895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.505 [2024-12-12 20:21:57.663903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.505 [2024-12-12 20:21:57.663910] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.505 [2024-12-12 20:21:57.663917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.505 [2024-12-12 20:21:57.663924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.505 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:13.505 20:21:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:14.076 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:14.076 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:14.076 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:14.076 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.076 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.076 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.076 20:21:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.076 20:21:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.076 20:21:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.076 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:14.076 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:14.076 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.076 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.076 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:14.335 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:14.335 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.335 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.335 20:21:58 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.335 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:14.335 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:14.335 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.335 20:21:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.556 20:22:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.556 20:22:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.556 20:22:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.556 20:22:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.556 20:22:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.556 20:22:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:26.556 20:22:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:26.556 [2024-12-12 20:22:10.563049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
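[editor's note] xtrace prints each echo's arguments but not its redirection target, so the re-attach sequence above (echo 1, then per device: echo uio_pci_generic, the BDF twice, and an empty string) is only half visible in the log. A plausible reconstruction, assuming the standard Linux PCI sysfs interface; the exact paths are an assumption, not read from the log:

# re-attach one controller to the userspace driver
rebind_to_uio() {
    local bdf=$1   # e.g. 0000:00:10.0
    # steer the next probe of this device to uio_pci_generic
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    # detach from whatever driver currently holds it, then ask for a re-probe
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind" 2>/dev/null || true
    echo "$bdf" > /sys/bus/pci/drivers_probe
    # clear the override so later rebinds are not pinned to uio_pci_generic
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
}

The leading "echo 1" is consistent with writing 1 to /sys/bus/pci/rescan to re-enumerate the previously removed devices, and the trailing "sleep 12" gives the target time to re-probe both controllers before the test re-reads bdev_bdfs and compares against the expected BDF list.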
00:11:26.556 [2024-12-12 20:22:10.564617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.556 [2024-12-12 20:22:10.564676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.556 [2024-12-12 20:22:10.564690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.556 [2024-12-12 20:22:10.564721] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.556 [2024-12-12 20:22:10.564732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.556 [2024-12-12 20:22:10.564744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.556 [2024-12-12 20:22:10.564754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.556 [2024-12-12 20:22:10.564768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.556 [2024-12-12 20:22:10.564777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.556 [2024-12-12 20:22:10.564789] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.556 [2024-12-12 20:22:10.564798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.556 [2024-12-12 20:22:10.564809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.817 [2024-12-12 20:22:10.963049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:26.817 [2024-12-12 20:22:10.964607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.817 [2024-12-12 20:22:10.964660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.817 [2024-12-12 20:22:10.964678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.817 [2024-12-12 20:22:10.964704] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.817 [2024-12-12 20:22:10.964716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.817 [2024-12-12 20:22:10.964724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.817 [2024-12-12 20:22:10.964738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.817 [2024-12-12 20:22:10.964747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.817 [2024-12-12 20:22:10.964758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.817 [2024-12-12 20:22:10.964767] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.817 [2024-12-12 20:22:10.964781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.817 [2024-12-12 20:22:10.964789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.078 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:27.078 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:27.078 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:27.078 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:27.078 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:27.078 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:27.078 20:22:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.078 20:22:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.078 20:22:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.078 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:27.078 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:27.078 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.078 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.078 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:27.078 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:27.339 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.339 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.339 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.339 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:27.339 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:27.339 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.339 20:22:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:39.571 20:22:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:39.571 20:22:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:39.571 20:22:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:39.571 20:22:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.571 20:22:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.571 20:22:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.571 20:22:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:39.571 20:22:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.25 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.25 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:39.571 20:22:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.25 00:11:39.571 20:22:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.25 2 00:11:39.571 remove_attach_helper took 45.25s to complete (handling 2 nvme drive(s)) 20:22:23 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:39.571 20:22:23 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69079 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69079 ']' 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69079 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69079 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.571 killing process with pid 69079 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69079' 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69079 00:11:39.571 20:22:23 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69079 00:11:40.955 20:22:24 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:41.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:41.788 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:41.788 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:41.788 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:41.788 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:41.788 00:11:41.788 real 2m29.493s 00:11:41.788 user 1m51.333s 00:11:41.788 sys 0m16.867s 00:11:41.788 
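[editor's note] The kill sequence traced above (kill -0, ps --no-headers -o comm=, kill, wait) is autotest_common.sh's killprocess helper tearing down the SPDK target (pid 69079, whose main thread reports itself as reactor_0). A condensed sketch reconstructed from the trace; the real helper carries more branches, so treat this as an illustration:

killprocess() {
    local pid=$1 name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    if [[ $(uname) == Linux ]]; then
        # comm= prints the bare executable name; 'reactor_0' in the trace is
        # SPDK's event-loop thread. The real helper special-cases a 'sudo'
        # wrapper (it must signal sudo's child instead); omitted here.
        name=$(ps --no-headers -o comm= "$pid")
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # reap it, so the caller observes the real exit
}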
************************************ 00:11:41.788 END TEST sw_hotplug 00:11:41.788 ************************************ 00:11:41.788 20:22:25 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.788 20:22:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.788 20:22:25 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:41.788 20:22:25 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:41.788 20:22:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:41.788 20:22:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.788 20:22:25 -- common/autotest_common.sh@10 -- # set +x 00:11:41.788 ************************************ 00:11:41.788 START TEST nvme_xnvme 00:11:41.788 ************************************ 00:11:41.788 20:22:25 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:42.051 * Looking for test storage... 00:11:42.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.051 20:22:26 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.051 --rc genhtml_branch_coverage=1 00:11:42.051 --rc genhtml_function_coverage=1 00:11:42.051 --rc genhtml_legend=1 00:11:42.051 --rc geninfo_all_blocks=1 00:11:42.051 --rc geninfo_unexecuted_blocks=1 00:11:42.051 00:11:42.051 ' 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.051 --rc genhtml_branch_coverage=1 00:11:42.051 --rc genhtml_function_coverage=1 00:11:42.051 --rc genhtml_legend=1 00:11:42.051 --rc geninfo_all_blocks=1 00:11:42.051 --rc geninfo_unexecuted_blocks=1 00:11:42.051 00:11:42.051 ' 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.051 --rc genhtml_branch_coverage=1 00:11:42.051 --rc genhtml_function_coverage=1 00:11:42.051 --rc genhtml_legend=1 00:11:42.051 --rc geninfo_all_blocks=1 00:11:42.051 --rc geninfo_unexecuted_blocks=1 00:11:42.051 00:11:42.051 ' 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.051 --rc genhtml_branch_coverage=1 00:11:42.051 --rc genhtml_function_coverage=1 00:11:42.051 --rc genhtml_legend=1 00:11:42.051 --rc geninfo_all_blocks=1 00:11:42.051 --rc geninfo_unexecuted_blocks=1 00:11:42.051 00:11:42.051 ' 00:11:42.051 20:22:26 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:11:42.051 20:22:26 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:42.051 20:22:26 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:42.051 20:22:26 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:42.051 20:22:26 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:42.052 20:22:26 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:42.052 20:22:26 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:42.052 20:22:26 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:42.052 #define SPDK_CONFIG_H 00:11:42.052 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:42.052 #define SPDK_CONFIG_APPS 1 00:11:42.052 #define SPDK_CONFIG_ARCH native 00:11:42.052 #define SPDK_CONFIG_ASAN 1 00:11:42.052 #undef SPDK_CONFIG_AVAHI 00:11:42.052 #undef SPDK_CONFIG_CET 00:11:42.052 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:42.052 #define SPDK_CONFIG_COVERAGE 1 00:11:42.052 #define SPDK_CONFIG_CROSS_PREFIX 00:11:42.052 #undef SPDK_CONFIG_CRYPTO 00:11:42.052 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:42.052 #undef SPDK_CONFIG_CUSTOMOCF 00:11:42.052 #undef SPDK_CONFIG_DAOS 00:11:42.052 #define SPDK_CONFIG_DAOS_DIR 00:11:42.052 #define SPDK_CONFIG_DEBUG 1 00:11:42.052 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:42.052 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:42.052 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:42.052 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:42.052 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:42.052 #undef SPDK_CONFIG_DPDK_UADK 00:11:42.052 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:42.052 #define SPDK_CONFIG_EXAMPLES 1 00:11:42.052 #undef SPDK_CONFIG_FC 00:11:42.052 #define SPDK_CONFIG_FC_PATH 00:11:42.052 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:42.052 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:42.052 #define SPDK_CONFIG_FSDEV 1 00:11:42.052 #undef SPDK_CONFIG_FUSE 00:11:42.052 #undef SPDK_CONFIG_FUZZER 00:11:42.052 #define SPDK_CONFIG_FUZZER_LIB 00:11:42.052 #undef SPDK_CONFIG_GOLANG 00:11:42.052 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:42.052 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:42.052 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:42.052 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:42.052 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:42.052 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:42.052 #undef SPDK_CONFIG_HAVE_LZ4 00:11:42.052 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:42.052 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:42.052 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:42.052 #define SPDK_CONFIG_IDXD 1 00:11:42.052 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:42.052 #undef SPDK_CONFIG_IPSEC_MB 00:11:42.052 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:42.052 #define SPDK_CONFIG_ISAL 1 00:11:42.052 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:42.052 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:42.052 #define SPDK_CONFIG_LIBDIR 00:11:42.052 #undef SPDK_CONFIG_LTO 00:11:42.052 #define SPDK_CONFIG_MAX_LCORES 128 00:11:42.052 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:42.052 #define SPDK_CONFIG_NVME_CUSE 1 00:11:42.052 #undef SPDK_CONFIG_OCF 00:11:42.052 #define SPDK_CONFIG_OCF_PATH 00:11:42.052 #define SPDK_CONFIG_OPENSSL_PATH 00:11:42.052 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:42.052 #define SPDK_CONFIG_PGO_DIR 00:11:42.052 #undef SPDK_CONFIG_PGO_USE 00:11:42.052 #define SPDK_CONFIG_PREFIX /usr/local 00:11:42.052 #undef SPDK_CONFIG_RAID5F 00:11:42.052 #undef SPDK_CONFIG_RBD 00:11:42.052 #define SPDK_CONFIG_RDMA 1 00:11:42.052 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:42.052 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:42.052 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:42.052 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:42.052 #define SPDK_CONFIG_SHARED 1 00:11:42.052 #undef SPDK_CONFIG_SMA 00:11:42.052 #define SPDK_CONFIG_TESTS 1 00:11:42.052 #undef SPDK_CONFIG_TSAN 00:11:42.052 #define SPDK_CONFIG_UBLK 1 00:11:42.052 #define SPDK_CONFIG_UBSAN 1 00:11:42.052 #undef SPDK_CONFIG_UNIT_TESTS 00:11:42.052 #undef SPDK_CONFIG_URING 00:11:42.052 #define SPDK_CONFIG_URING_PATH 00:11:42.052 #undef SPDK_CONFIG_URING_ZNS 00:11:42.052 #undef SPDK_CONFIG_USDT 00:11:42.052 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:42.052 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:42.052 #undef SPDK_CONFIG_VFIO_USER 00:11:42.052 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:42.052 #define SPDK_CONFIG_VHOST 1 00:11:42.052 #define SPDK_CONFIG_VIRTIO 1 00:11:42.052 #undef SPDK_CONFIG_VTUNE 00:11:42.052 #define SPDK_CONFIG_VTUNE_DIR 00:11:42.052 #define SPDK_CONFIG_WERROR 1 00:11:42.052 #define SPDK_CONFIG_WPDK_DIR 00:11:42.052 #define SPDK_CONFIG_XNVME 1 00:11:42.052 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:42.052 20:22:26 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:42.052 20:22:26 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:42.052 20:22:26 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.052 20:22:26 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.052 20:22:26 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.052 20:22:26 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.052 20:22:26 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.052 20:22:26 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.052 20:22:26 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.053 20:22:26 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:42.053 20:22:26 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@68 -- # uname -s 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:42.053 
20:22:26 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:42.053 20:22:26 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:11:42.053 20:22:26 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:42.054 20:22:26 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:42.054 20:22:26 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
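[editor's note] The long run of ": 0" / ": 1" lines followed by "export SPDK_TEST_*" above is bash xtrace of a defaulting idiom: ':' is the no-op builtin, and evaluating its argument forces the ${var:=default} expansion, so every test flag receives a default without clobbering values already set by autorun-spdk.conf. Reconstructed with flag names and values taken from this trace:

: "${SPDK_RUN_FUNCTIONAL_TEST:=1}"; export SPDK_RUN_FUNCTIONAL_TEST
: "${SPDK_TEST_NVME:=1}";           export SPDK_TEST_NVME
: "${SPDK_TEST_NVME_FDP:=1}";       export SPDK_TEST_NVME_FDP
: "${SPDK_TEST_XNVME:=1}";          export SPDK_TEST_XNVME
: "${SPDK_TEST_NVMF:=0}";           export SPDK_TEST_NVMF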
00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70435 ]] 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70435 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.QHhZXZ 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.QHhZXZ/tests/xnvme /tmp/spdk.QHhZXZ 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:11:42.054 20:22:26 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13971759104 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5596356608 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260625408 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.054 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13971759104 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5596356608 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.055 20:22:26 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=95130918912 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4571860992 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:42.055 * Looking for test storage... 
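The df walk above caches each mount's filesystem type, total size, and free space into associative arrays, which the storage probe that follows uses to decide where ~2 GiB of test scratch can live. Reduced to its core (a sketch only: the real set_test_storage also rejects tmpfs/ramfs candidates as seen below, and the exact df flags are an assumption here):

    declare -A fss avails
    # Cache filesystem type and available bytes per mount point.
    while read -r source fs size used avail _ mount; do
        fss["$mount"]=$fs
        avails["$mount"]=$avail
    done < <(df -B1 -T | grep -v Filesystem)
    # Resolve the mount backing the test dir, then check the space there.
    mount=$(df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme | awk '$1 !~ /Filesystem/{print $6}')
    (( avails[$mount] >= 2214592512 )) && echo "enough room on a ${fss[$mount]} mount"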
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13971759104
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]]
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]]
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]]
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:11:42.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@1703 -- # true
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@27 -- # exec
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@29 -- # exec
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version
00:11:42.055 20:22:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:42.348 20:22:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-:
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-:
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<'
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@345 -- # : 1
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=1
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 1
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=2
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:42.348 20:22:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 2
00:11:42.349 20:22:26 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2
00:11:42.349 20:22:26 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:42.349 20:22:26 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:42.349 20:22:26 nvme_xnvme -- scripts/common.sh@368 -- # return 0
00:11:42.349 20:22:26 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:42.349 20:22:26 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:42.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:42.349 --rc genhtml_branch_coverage=1
00:11:42.349 --rc genhtml_function_coverage=1
00:11:42.349 --rc genhtml_legend=1
00:11:42.349 --rc geninfo_all_blocks=1
00:11:42.349 --rc geninfo_unexecuted_blocks=1
00:11:42.349
00:11:42.349 '
00:11:42.349 20:22:26 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:42.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:42.349 --rc genhtml_branch_coverage=1
00:11:42.349 --rc genhtml_function_coverage=1
00:11:42.349 --rc genhtml_legend=1
00:11:42.349 --rc geninfo_all_blocks=1
00:11:42.349 --rc geninfo_unexecuted_blocks=1 00:11:42.349 00:11:42.349 ' 00:11:42.349 20:22:26 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.349 --rc genhtml_branch_coverage=1 00:11:42.349 --rc genhtml_function_coverage=1 00:11:42.349 --rc genhtml_legend=1 00:11:42.349 --rc geninfo_all_blocks=1 00:11:42.349 --rc geninfo_unexecuted_blocks=1 00:11:42.349 00:11:42.349 ' 00:11:42.349 20:22:26 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.349 --rc genhtml_branch_coverage=1 00:11:42.349 --rc genhtml_function_coverage=1 00:11:42.349 --rc genhtml_legend=1 00:11:42.349 --rc geninfo_all_blocks=1 00:11:42.349 --rc geninfo_unexecuted_blocks=1 00:11:42.349 00:11:42.349 ' 00:11:42.349 20:22:26 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:42.349 20:22:26 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.349 20:22:26 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.349 20:22:26 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.349 20:22:26 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.349 20:22:26 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.349 20:22:26 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.349 20:22:26 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.349 20:22:26 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:42.349 20:22:26 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.349 20:22:26 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:11:42.349 20:22:26 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:42.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:42.631 Waiting for block devices as requested 00:11:42.631 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:42.892 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:42.892 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:42.892 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:48.238 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:48.238 20:22:32 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:11:48.498 20:22:32 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:11:48.498 20:22:32 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:11:48.759 20:22:32 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:11:48.759 20:22:32 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:48.759 No valid GPT data, bailing 00:11:48.759 20:22:32 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:48.759 20:22:32 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:11:48.759 20:22:32 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:11:48.759 20:22:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:11:48.759 20:22:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:48.759 20:22:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.759 20:22:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:48.759 ************************************ 00:11:48.759 START TEST xnvme_rpc 00:11:48.759 ************************************ 00:11:48.759 20:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:11:48.759 20:22:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:11:48.760 20:22:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:11:48.760 20:22:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:11:48.760 20:22:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:11:48.760 20:22:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70830 00:11:48.760 20:22:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70830 00:11:48.760 20:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70830 ']' 00:11:48.760 20:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.760 20:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.760 20:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.760 20:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.760 20:22:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.760 20:22:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:48.760 [2024-12-12 20:22:32.908015] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
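At this point the GPT probe has confirmed /dev/nvme0n1 is unpartitioned and the first test, xnvme_rpc, launches a fresh spdk_tgt. The test itself is a create/inspect/delete round trip: create an xnvme bdev over the raw namespace, read the bdev config back, and assert that every parameter survived. Outside the harness, the same exchange looks roughly like this (run from the SPDK repo root; the RPC socket defaults to /var/tmp/spdk.sock):

    # Create the bdev over the raw namespace with the libaio mechanism.
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    # Read the recorded create call back and check one parameter at a time.
    scripts/rpc.py framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'    # expect: libaio
    # Tear the bdev down again.
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev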
00:11:48.760 [2024-12-12 20:22:32.908158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70830 ] 00:11:49.020 [2024-12-12 20:22:33.071333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.020 [2024-12-12 20:22:33.212512] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.962 20:22:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.962 20:22:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:49.962 20:22:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:11:49.962 20:22:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.962 20:22:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.962 xnvme_bdev 00:11:49.962 20:22:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.962 20:22:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:11:49.962 20:22:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:49.962 20:22:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.962 20:22:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.962 20:22:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70830 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70830 ']' 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70830 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70830 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.962 killing process with pid 70830 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70830' 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70830 00:11:49.962 20:22:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70830 00:11:51.877 00:11:51.877 real 0m3.112s 00:11:51.877 user 0m3.052s 00:11:51.877 sys 0m0.519s 00:11:51.877 20:22:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.877 20:22:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.877 ************************************ 00:11:51.877 END TEST xnvme_rpc 00:11:51.877 ************************************ 00:11:51.877 20:22:35 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:51.877 20:22:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:51.877 20:22:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.877 20:22:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:51.877 ************************************ 00:11:51.877 START TEST xnvme_bdevperf 00:11:51.877 ************************************ 00:11:51.877 20:22:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:11:51.877 20:22:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:11:51.877 20:22:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:11:51.877 20:22:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:11:51.877 20:22:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:11:51.877 20:22:36 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:11:51.877 20:22:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:51.877 20:22:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:51.877 { 00:11:51.877 "subsystems": [ 00:11:51.877 { 00:11:51.877 "subsystem": "bdev", 00:11:51.877 "config": [ 00:11:51.877 { 00:11:51.877 "params": { 00:11:51.877 "io_mechanism": "libaio", 00:11:51.877 "conserve_cpu": false, 00:11:51.877 "filename": "/dev/nvme0n1", 00:11:51.877 "name": "xnvme_bdev" 00:11:51.877 }, 00:11:51.877 "method": "bdev_xnvme_create" 00:11:51.877 }, 00:11:51.877 { 00:11:51.877 "method": "bdev_wait_for_examine" 00:11:51.877 } 00:11:51.877 ] 00:11:51.877 } 00:11:51.877 ] 00:11:51.877 } 00:11:51.877 [2024-12-12 20:22:36.094565] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:11:51.877 [2024-12-12 20:22:36.094734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70905 ] 00:11:52.139 [2024-12-12 20:22:36.254563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.139 [2024-12-12 20:22:36.356614] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.400 Running I/O for 5 seconds... 00:11:54.745 29448.00 IOPS, 115.03 MiB/s [2024-12-12T20:22:39.909Z] 31024.50 IOPS, 121.19 MiB/s [2024-12-12T20:22:40.843Z] 31231.33 IOPS, 122.00 MiB/s [2024-12-12T20:22:41.774Z] 32299.25 IOPS, 126.17 MiB/s 00:11:57.546 Latency(us) 00:11:57.546 [2024-12-12T20:22:41.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:57.546 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:57.546 xnvme_bdev : 5.00 32903.53 128.53 0.00 0.00 1940.50 182.74 7057.72 00:11:57.546 [2024-12-12T20:22:41.774Z] =================================================================================================================== 00:11:57.546 [2024-12-12T20:22:41.774Z] Total : 32903.53 128.53 0.00 0.00 1940.50 182.74 7057.72 00:11:58.479 20:22:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:11:58.479 20:22:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:11:58.479 20:22:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:11:58.479 20:22:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:58.479 20:22:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:58.479 { 00:11:58.479 "subsystems": [ 00:11:58.479 { 00:11:58.479 "subsystem": "bdev", 00:11:58.479 "config": [ 00:11:58.479 { 00:11:58.479 "params": { 00:11:58.479 "io_mechanism": "libaio", 00:11:58.479 "conserve_cpu": false, 00:11:58.479 "filename": "/dev/nvme0n1", 00:11:58.479 "name": "xnvme_bdev" 00:11:58.479 }, 00:11:58.479 "method": "bdev_xnvme_create" 00:11:58.479 }, 00:11:58.479 { 00:11:58.479 "method": "bdev_wait_for_examine" 00:11:58.479 } 00:11:58.479 ] 00:11:58.479 } 00:11:58.479 ] 00:11:58.479 } 00:11:58.479 [2024-12-12 20:22:42.433284] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
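Every bdevperf invocation in this section takes its bdev definition as JSON on an anonymous descriptor (--json /dev/fd/62), so no config file is written between runs. Recreated by hand with process substitution, using the exact config the harness printed above:

    build/examples/bdevperf -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 --json <(cat <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"method": "bdev_xnvme_create",
       "params": {"io_mechanism": "libaio", "conserve_cpu": false,
                  "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
      {"method": "bdev_wait_for_examine"}]}]}
    EOF
    )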
00:11:58.479 [2024-12-12 20:22:42.433404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70980 ] 00:11:58.479 [2024-12-12 20:22:42.595171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.479 [2024-12-12 20:22:42.695421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.736 Running I/O for 5 seconds... 00:12:01.108 36315.00 IOPS, 141.86 MiB/s [2024-12-12T20:22:46.273Z] 35723.00 IOPS, 139.54 MiB/s [2024-12-12T20:22:47.212Z] 36561.33 IOPS, 142.82 MiB/s [2024-12-12T20:22:48.152Z] 36308.75 IOPS, 141.83 MiB/s 00:12:03.924 Latency(us) 00:12:03.924 [2024-12-12T20:22:48.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.924 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:03.924 xnvme_bdev : 5.00 36001.76 140.63 0.00 0.00 1773.13 194.56 6553.60 00:12:03.924 [2024-12-12T20:22:48.152Z] =================================================================================================================== 00:12:03.924 [2024-12-12T20:22:48.152Z] Total : 36001.76 140.63 0.00 0.00 1773.13 194.56 6553.60 00:12:04.865 00:12:04.865 real 0m12.745s 00:12:04.865 user 0m4.705s 00:12:04.865 sys 0m5.851s 00:12:04.866 20:22:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.866 20:22:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:04.866 ************************************ 00:12:04.866 END TEST xnvme_bdevperf 00:12:04.866 ************************************ 00:12:04.866 20:22:48 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:04.866 20:22:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:04.866 20:22:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.866 20:22:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.866 ************************************ 00:12:04.866 START TEST xnvme_fio_plugin 00:12:04.866 ************************************ 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:04.866 20:22:48 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:04.866 20:22:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:04.866 { 00:12:04.866 "subsystems": [ 00:12:04.866 { 00:12:04.866 "subsystem": "bdev", 00:12:04.866 "config": [ 00:12:04.866 { 00:12:04.866 "params": { 00:12:04.866 "io_mechanism": "libaio", 00:12:04.866 "conserve_cpu": false, 00:12:04.866 "filename": "/dev/nvme0n1", 00:12:04.866 "name": "xnvme_bdev" 00:12:04.866 }, 00:12:04.866 "method": "bdev_xnvme_create" 00:12:04.866 }, 00:12:04.866 { 00:12:04.866 "method": "bdev_wait_for_examine" 00:12:04.866 } 00:12:04.866 ] 00:12:04.866 } 00:12:04.866 ] 00:12:04.866 } 00:12:04.866 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:04.866 fio-3.35 00:12:04.866 Starting 1 thread 00:12:11.515 00:12:11.515 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71095: Thu Dec 12 20:22:54 2024 00:12:11.515 read: IOPS=40.2k, BW=157MiB/s (165MB/s)(786MiB/5001msec) 00:12:11.515 slat (usec): min=3, max=1487, avg=20.69, stdev=39.02 00:12:11.515 clat (usec): min=66, max=26195, avg=958.41, stdev=546.99 00:12:11.515 lat (usec): min=71, max=26218, avg=979.10, stdev=548.43 00:12:11.515 clat percentiles (usec): 00:12:11.515 | 1.00th=[ 180], 5.00th=[ 262], 10.00th=[ 355], 20.00th=[ 506], 00:12:11.515 | 30.00th=[ 644], 40.00th=[ 766], 50.00th=[ 881], 60.00th=[ 1004], 00:12:11.515 | 70.00th=[ 1139], 80.00th=[ 1303], 90.00th=[ 1598], 95.00th=[ 1958], 00:12:11.515 | 99.00th=[ 2900], 99.50th=[ 3195], 99.90th=[ 3720], 99.95th=[ 3949], 00:12:11.515 | 99.99th=[ 4359] 00:12:11.515 bw ( KiB/s): min=143272, max=176128, per=99.45%, avg=160012.44, stdev=12941.09, 
samples=9 00:12:11.515 iops : min=35818, max=44032, avg=40003.11, stdev=3235.27, samples=9 00:12:11.515 lat (usec) : 100=0.01%, 250=4.36%, 500=15.29%, 750=18.94%, 1000=21.29% 00:12:11.515 lat (msec) : 2=35.43%, 4=4.65%, 10=0.03%, 50=0.01% 00:12:11.515 cpu : usr=29.70%, sys=53.86%, ctx=50, majf=0, minf=764 00:12:11.515 IO depths : 1=0.3%, 2=1.4%, 4=4.7%, 8=11.3%, 16=24.8%, 32=55.6%, >=64=1.8% 00:12:11.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.515 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:12:11.515 issued rwts: total=201169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.515 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:11.515 00:12:11.515 Run status group 0 (all jobs): 00:12:11.515 READ: bw=157MiB/s (165MB/s), 157MiB/s-157MiB/s (165MB/s-165MB/s), io=786MiB (824MB), run=5001-5001msec 00:12:11.515 ----------------------------------------------------- 00:12:11.515 Suppressions used: 00:12:11.515 count bytes template 00:12:11.515 1 11 /usr/src/fio/parse.c 00:12:11.515 1 8 libtcmalloc_minimal.so 00:12:11.515 1 904 libcrypto.so 00:12:11.515 ----------------------------------------------------- 00:12:11.515 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:11.515 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:11.774 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:11.774 20:22:55 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:11.774 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:11.774 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:11.774 20:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:11.774 { 00:12:11.774 "subsystems": [ 00:12:11.774 { 00:12:11.774 "subsystem": "bdev", 00:12:11.774 "config": [ 00:12:11.774 { 00:12:11.774 "params": { 00:12:11.774 "io_mechanism": "libaio", 00:12:11.774 "conserve_cpu": false, 00:12:11.774 "filename": "/dev/nvme0n1", 00:12:11.774 "name": "xnvme_bdev" 00:12:11.774 }, 00:12:11.774 "method": "bdev_xnvme_create" 00:12:11.774 }, 00:12:11.774 { 00:12:11.774 "method": "bdev_wait_for_examine" 00:12:11.774 } 00:12:11.774 ] 00:12:11.774 } 00:12:11.774 ] 00:12:11.774 } 00:12:11.774 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:11.774 fio-3.35 00:12:11.774 Starting 1 thread 00:12:18.332 00:12:18.332 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71191: Thu Dec 12 20:23:01 2024 00:12:18.332 write: IOPS=43.0k, BW=168MiB/s (176MB/s)(840MiB/5001msec); 0 zone resets 00:12:18.332 slat (usec): min=3, max=800, avg=19.92, stdev=28.28 00:12:18.332 clat (usec): min=29, max=5240, avg=872.91, stdev=526.10 00:12:18.332 lat (usec): min=120, max=5298, avg=892.83, stdev=529.40 00:12:18.332 clat percentiles (usec): 00:12:18.332 | 1.00th=[ 178], 5.00th=[ 249], 10.00th=[ 322], 20.00th=[ 445], 00:12:18.332 | 30.00th=[ 553], 40.00th=[ 668], 50.00th=[ 783], 60.00th=[ 898], 00:12:18.332 | 70.00th=[ 1029], 80.00th=[ 1188], 90.00th=[ 1483], 95.00th=[ 1876], 00:12:18.332 | 99.00th=[ 2802], 99.50th=[ 3130], 99.90th=[ 3785], 99.95th=[ 4047], 00:12:18.332 | 99.99th=[ 4686] 00:12:18.332 bw ( KiB/s): min=160063, max=181704, per=99.22%, avg=170658.56, stdev=6952.73, samples=9 00:12:18.332 iops : min=40015, max=45426, avg=42664.56, stdev=1738.32, samples=9 00:12:18.332 lat (usec) : 50=0.01%, 100=0.01%, 250=5.01%, 500=19.74%, 750=22.50% 00:12:18.332 lat (usec) : 1000=20.48% 00:12:18.332 lat (msec) : 2=28.29%, 4=3.91%, 10=0.06% 00:12:18.332 cpu : usr=27.04%, sys=54.82%, ctx=122, majf=0, minf=765 00:12:18.332 IO depths : 1=0.2%, 2=1.4%, 4=4.7%, 8=11.3%, 16=25.1%, 32=55.6%, >=64=1.8% 00:12:18.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.332 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:12:18.332 issued rwts: total=0,215046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.332 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:18.332 00:12:18.332 Run status group 0 (all jobs): 00:12:18.332 WRITE: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=840MiB (881MB), run=5001-5001msec 00:12:18.332 ----------------------------------------------------- 00:12:18.332 Suppressions used: 00:12:18.332 count bytes template 00:12:18.332 1 11 /usr/src/fio/parse.c 00:12:18.332 1 8 libtcmalloc_minimal.so 00:12:18.332 1 904 libcrypto.so 00:12:18.332 ----------------------------------------------------- 00:12:18.332 00:12:18.591 ************************************ 00:12:18.591 END TEST xnvme_fio_plugin 00:12:18.591 
************************************ 00:12:18.591 00:12:18.591 real 0m13.774s 00:12:18.591 user 0m5.629s 00:12:18.591 sys 0m6.020s 00:12:18.591 20:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.591 20:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:18.591 20:23:02 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:18.591 20:23:02 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:18.591 20:23:02 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:18.591 20:23:02 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:18.591 20:23:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:18.591 20:23:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.591 20:23:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.591 ************************************ 00:12:18.591 START TEST xnvme_rpc 00:12:18.591 ************************************ 00:12:18.591 20:23:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:18.591 20:23:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:18.591 20:23:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:18.591 20:23:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:18.591 20:23:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:18.591 20:23:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71273 00:12:18.591 20:23:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71273 00:12:18.591 20:23:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71273 ']' 00:12:18.591 20:23:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.591 20:23:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.591 20:23:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.591 20:23:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.591 20:23:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.591 20:23:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:18.591 [2024-12-12 20:23:02.686854] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
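From here the whole sequence repeats with conserve_cpu=true: the cc map traced above turns that string into a single extra -c flag on the create call, and the readback assertion flips to expect true. The only delta versus the earlier run, sketched:

    # Same round trip, but ask xnvme to conserve CPU rather than poll aggressively.
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    scripts/rpc.py framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'    # expect: true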
00:12:18.591 [2024-12-12 20:23:02.686986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71273 ] 00:12:18.849 [2024-12-12 20:23:02.848939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.849 [2024-12-12 20:23:02.950629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.421 xnvme_bdev 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:19.421 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71273 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71273 ']' 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71273 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71273 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.683 killing process with pid 71273 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71273' 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71273 00:12:19.683 20:23:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71273 00:12:21.586 00:12:21.586 real 0m2.692s 00:12:21.586 user 0m2.801s 00:12:21.586 sys 0m0.360s 00:12:21.586 20:23:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.586 20:23:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.586 ************************************ 00:12:21.586 END TEST xnvme_rpc 00:12:21.586 ************************************ 00:12:21.586 20:23:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:21.586 20:23:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:21.586 20:23:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.586 20:23:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.586 ************************************ 00:12:21.586 START TEST xnvme_bdevperf 00:12:21.586 ************************************ 00:12:21.586 20:23:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:21.586 20:23:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:21.586 20:23:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:21.586 20:23:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:21.586 20:23:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:21.586 20:23:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
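killprocess, traced again just above while tearing down pid 71273, is deliberately defensive: it verifies the PID is still alive, inspects the command name (an SPDK target reports itself as reactor_0), and refuses to signal anything running as sudo before killing and reaping. Reconstructed from the traced checks (a simplification; the real helper carries more fallbacks):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1        # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0
        [[ $name != sudo ]] || return 1               # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }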
00:12:21.586 20:23:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:21.586 20:23:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:21.586 { 00:12:21.586 "subsystems": [ 00:12:21.586 { 00:12:21.586 "subsystem": "bdev", 00:12:21.586 "config": [ 00:12:21.586 { 00:12:21.586 "params": { 00:12:21.586 "io_mechanism": "libaio", 00:12:21.586 "conserve_cpu": true, 00:12:21.586 "filename": "/dev/nvme0n1", 00:12:21.586 "name": "xnvme_bdev" 00:12:21.586 }, 00:12:21.586 "method": "bdev_xnvme_create" 00:12:21.586 }, 00:12:21.586 { 00:12:21.586 "method": "bdev_wait_for_examine" 00:12:21.586 } 00:12:21.586 ] 00:12:21.586 } 00:12:21.587 ] 00:12:21.587 } 00:12:21.587 [2024-12-12 20:23:05.400773] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:12:21.587 [2024-12-12 20:23:05.400885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71341 ] 00:12:21.587 [2024-12-12 20:23:05.558114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.587 [2024-12-12 20:23:05.663365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.892 Running I/O for 5 seconds... 00:12:23.780 36554.00 IOPS, 142.79 MiB/s [2024-12-12T20:23:08.944Z] 37655.50 IOPS, 147.09 MiB/s [2024-12-12T20:23:10.317Z] 37842.33 IOPS, 147.82 MiB/s [2024-12-12T20:23:11.251Z] 38066.25 IOPS, 148.70 MiB/s 00:12:27.023 Latency(us) 00:12:27.023 [2024-12-12T20:23:11.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.023 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:27.023 xnvme_bdev : 5.00 38554.47 150.60 0.00 0.00 1655.70 170.14 69770.63 00:12:27.023 [2024-12-12T20:23:11.251Z] =================================================================================================================== 00:12:27.023 [2024-12-12T20:23:11.251Z] Total : 38554.47 150.60 0.00 0.00 1655.70 170.14 69770.63 00:12:27.588 20:23:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:27.588 20:23:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:27.588 20:23:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:27.588 20:23:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:27.588 20:23:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:27.588 { 00:12:27.588 "subsystems": [ 00:12:27.588 { 00:12:27.588 "subsystem": "bdev", 00:12:27.588 "config": [ 00:12:27.588 { 00:12:27.588 "params": { 00:12:27.588 "io_mechanism": "libaio", 00:12:27.588 "conserve_cpu": true, 00:12:27.588 "filename": "/dev/nvme0n1", 00:12:27.588 "name": "xnvme_bdev" 00:12:27.588 }, 00:12:27.588 "method": "bdev_xnvme_create" 00:12:27.588 }, 00:12:27.588 { 00:12:27.588 "method": "bdev_wait_for_examine" 00:12:27.588 } 00:12:27.588 ] 00:12:27.588 } 00:12:27.588 ] 00:12:27.588 } 00:12:27.588 [2024-12-12 20:23:11.757394] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
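The JSON blob printed above is the bdev configuration that gen_conf emits and bdevperf consumes over /dev/fd/62. A minimal standalone sketch of the same run follows, using a temporary file instead of the file descriptor; the file name /tmp/xnvme_bdev.json is illustrative, while the binary path and flags are the ones printed in this log:

    # config mirroring the gen_conf output traced above
    cat > /tmp/xnvme_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_xnvme_create",
              "params": {
                "io_mechanism": "libaio",
                "conserve_cpu": true,
                "filename": "/dev/nvme0n1",
                "name": "xnvme_bdev"
              }
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # same flags as the traced invocation: queue depth 64, 4 KiB I/O, 5 s run
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_bdev.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096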
00:12:27.588 [2024-12-12 20:23:11.757518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71422 ] 00:12:27.845 [2024-12-12 20:23:11.911483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.845 [2024-12-12 20:23:12.015669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.103 Running I/O for 5 seconds... 00:12:30.409 36269.00 IOPS, 141.68 MiB/s [2024-12-12T20:23:15.572Z] 36372.00 IOPS, 142.08 MiB/s [2024-12-12T20:23:16.509Z] 37014.67 IOPS, 144.59 MiB/s [2024-12-12T20:23:17.441Z] 36602.50 IOPS, 142.98 MiB/s 00:12:33.213 Latency(us) 00:12:33.213 [2024-12-12T20:23:17.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.213 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:33.213 xnvme_bdev : 5.00 36515.26 142.64 0.00 0.00 1747.91 178.02 5318.50 00:12:33.213 [2024-12-12T20:23:17.441Z] =================================================================================================================== 00:12:33.213 [2024-12-12T20:23:17.441Z] Total : 36515.26 142.64 0.00 0.00 1747.91 178.02 5318.50 00:12:34.144 00:12:34.144 real 0m12.748s 00:12:34.144 user 0m4.567s 00:12:34.144 sys 0m5.684s 00:12:34.144 ************************************ 00:12:34.144 END TEST xnvme_bdevperf 00:12:34.144 ************************************ 00:12:34.144 20:23:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.144 20:23:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:34.144 20:23:18 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:34.144 20:23:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:34.144 20:23:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.144 20:23:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:34.144 ************************************ 00:12:34.144 START TEST xnvme_fio_plugin 00:12:34.144 ************************************ 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:34.144 20:23:18 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:34.144 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:34.145 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:34.145 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:34.145 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:34.145 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:34.145 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:34.145 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:34.145 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:34.145 20:23:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:34.145 { 00:12:34.145 "subsystems": [ 00:12:34.145 { 00:12:34.145 "subsystem": "bdev", 00:12:34.145 "config": [ 00:12:34.145 { 00:12:34.145 "params": { 00:12:34.145 "io_mechanism": "libaio", 00:12:34.145 "conserve_cpu": true, 00:12:34.145 "filename": "/dev/nvme0n1", 00:12:34.145 "name": "xnvme_bdev" 00:12:34.145 }, 00:12:34.145 "method": "bdev_xnvme_create" 00:12:34.145 }, 00:12:34.145 { 00:12:34.145 "method": "bdev_wait_for_examine" 00:12:34.145 } 00:12:34.145 ] 00:12:34.145 } 00:12:34.145 ] 00:12:34.145 } 00:12:34.145 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:34.145 fio-3.35 00:12:34.145 Starting 1 thread 00:12:40.760 00:12:40.760 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71537: Thu Dec 12 20:23:24 2024 00:12:40.760 read: IOPS=49.5k, BW=193MiB/s (203MB/s)(967MiB/5001msec) 00:12:40.760 slat (usec): min=3, max=6920, avg=16.99, stdev=33.99 00:12:40.760 clat (usec): min=81, max=7543, avg=790.19, stdev=477.75 00:12:40.760 lat (usec): min=141, max=7584, avg=807.18, stdev=480.55 00:12:40.760 clat percentiles (usec): 00:12:40.760 | 1.00th=[ 172], 5.00th=[ 247], 10.00th=[ 318], 20.00th=[ 441], 00:12:40.760 | 30.00th=[ 537], 40.00th=[ 627], 50.00th=[ 709], 60.00th=[ 799], 00:12:40.760 | 70.00th=[ 898], 80.00th=[ 1037], 90.00th=[ 1270], 95.00th=[ 1598], 00:12:40.760 | 99.00th=[ 2704], 99.50th=[ 3097], 99.90th=[ 3720], 99.95th=[ 4080], 00:12:40.760 | 99.99th=[ 7439] 00:12:40.760 bw ( KiB/s): min=182320, max=224128, per=100.00%, avg=198757.33, stdev=13432.20, samples=9 
00:12:40.760 iops : min=45580, max=56032, avg=49689.33, stdev=3358.05, samples=9 00:12:40.760 lat (usec) : 100=0.01%, 250=5.21%, 500=20.58%, 750=28.97%, 1000=22.90% 00:12:40.760 lat (msec) : 2=19.50%, 4=2.77%, 10=0.05% 00:12:40.760 cpu : usr=31.58%, sys=52.40%, ctx=88, majf=0, minf=764 00:12:40.760 IO depths : 1=0.2%, 2=1.2%, 4=4.2%, 8=10.7%, 16=24.7%, 32=56.9%, >=64=1.9% 00:12:40.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.760 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:12:40.760 issued rwts: total=247528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.760 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.760 00:12:40.760 Run status group 0 (all jobs): 00:12:40.760 READ: bw=193MiB/s (203MB/s), 193MiB/s-193MiB/s (203MB/s-203MB/s), io=967MiB (1014MB), run=5001-5001msec 00:12:41.021 ----------------------------------------------------- 00:12:41.021 Suppressions used: 00:12:41.021 count bytes template 00:12:41.021 1 11 /usr/src/fio/parse.c 00:12:41.021 1 8 libtcmalloc_minimal.so 00:12:41.021 1 904 libcrypto.so 00:12:41.021 ----------------------------------------------------- 00:12:41.021 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:41.021 20:23:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:41.021 { 00:12:41.021 "subsystems": [ 00:12:41.021 { 00:12:41.021 "subsystem": "bdev", 00:12:41.021 "config": [ 00:12:41.021 { 00:12:41.021 "params": { 00:12:41.021 "io_mechanism": "libaio", 00:12:41.021 "conserve_cpu": true, 00:12:41.021 "filename": "/dev/nvme0n1", 00:12:41.022 "name": "xnvme_bdev" 00:12:41.022 }, 00:12:41.022 "method": "bdev_xnvme_create" 00:12:41.022 }, 00:12:41.022 { 00:12:41.022 "method": "bdev_wait_for_examine" 00:12:41.022 } 00:12:41.022 ] 00:12:41.022 } 00:12:41.022 ] 00:12:41.022 } 00:12:41.022 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:41.022 fio-3.35 00:12:41.022 Starting 1 thread 00:12:47.601 00:12:47.601 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71627: Thu Dec 12 20:23:30 2024 00:12:47.601 write: IOPS=43.9k, BW=171MiB/s (180MB/s)(857MiB/5001msec); 0 zone resets 00:12:47.601 slat (usec): min=3, max=1748, avg=18.76, stdev=43.24 00:12:47.601 clat (usec): min=52, max=80988, avg=904.71, stdev=1416.03 00:12:47.601 lat (usec): min=135, max=81002, avg=923.47, stdev=1417.02 00:12:47.601 clat percentiles (usec): 00:12:47.601 | 1.00th=[ 176], 5.00th=[ 258], 10.00th=[ 334], 20.00th=[ 461], 00:12:47.601 | 30.00th=[ 570], 40.00th=[ 668], 50.00th=[ 775], 60.00th=[ 889], 00:12:47.601 | 70.00th=[ 1012], 80.00th=[ 1188], 90.00th=[ 1500], 95.00th=[ 1942], 00:12:47.601 | 99.00th=[ 2966], 99.50th=[ 3294], 99.90th=[ 3982], 99.95th=[ 5145], 00:12:47.601 | 99.99th=[79168] 00:12:47.601 bw ( KiB/s): min=126240, max=196312, per=99.10%, avg=173852.44, stdev=21446.18, samples=9 00:12:47.601 iops : min=31560, max=49078, avg=43463.11, stdev=5361.54, samples=9 00:12:47.601 lat (usec) : 100=0.01%, 250=4.59%, 500=18.86%, 750=24.33%, 1000=21.02% 00:12:47.601 lat (msec) : 2=26.56%, 4=4.54%, 10=0.06%, 20=0.01%, 100=0.03% 00:12:47.601 cpu : usr=31.82%, sys=53.38%, ctx=108, majf=0, minf=765 00:12:47.601 IO depths : 1=0.2%, 2=1.2%, 4=4.2%, 8=10.8%, 16=25.2%, 32=56.5%, >=64=1.9% 00:12:47.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.601 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:12:47.601 issued rwts: total=0,219334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:47.601 00:12:47.601 Run status group 0 (all jobs): 00:12:47.601 WRITE: bw=171MiB/s (180MB/s), 171MiB/s-171MiB/s (180MB/s-180MB/s), io=857MiB (898MB), run=5001-5001msec 00:12:47.862 ----------------------------------------------------- 00:12:47.862 Suppressions used: 00:12:47.862 count bytes template 00:12:47.862 1 11 /usr/src/fio/parse.c 00:12:47.862 1 8 libtcmalloc_minimal.so 00:12:47.862 1 904 libcrypto.so 00:12:47.862 ----------------------------------------------------- 00:12:47.862 00:12:47.862 00:12:47.862 real 0m13.743s 00:12:47.862 user 0m5.978s 00:12:47.862 sys 0m5.830s 00:12:47.862 
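Both fio runs above bypass the kernel block layer and go through SPDK's fio bdev plugin: the plugin is LD_PRELOADed behind libasan (so the sanitizer stays first in the resolution chain) and fio addresses the bdev by name via --ioengine=spdk_bdev. A sketch reconstructed from the xtrace, with the same illustrative /tmp/xnvme_bdev.json standing in for /dev/fd/62:

    # ASAN first, then the spdk_bdev ioengine plugin, exactly as traced
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev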
************************************ 00:12:47.862 END TEST xnvme_fio_plugin 00:12:47.862 ************************************ 00:12:47.862 20:23:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.862 20:23:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:47.862 20:23:31 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:47.862 20:23:31 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:47.862 20:23:31 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:47.862 20:23:31 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:47.862 20:23:31 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:47.862 20:23:31 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:47.862 20:23:31 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:47.862 20:23:31 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:47.862 20:23:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:47.862 20:23:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:47.862 20:23:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.862 20:23:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.862 ************************************ 00:12:47.862 START TEST xnvme_rpc 00:12:47.862 ************************************ 00:12:47.862 20:23:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:47.862 20:23:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:47.862 20:23:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:47.862 20:23:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:47.862 20:23:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:47.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.862 20:23:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71715 00:12:47.862 20:23:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71715 00:12:47.862 20:23:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71715 ']' 00:12:47.862 20:23:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:47.862 20:23:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.862 20:23:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.862 20:23:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.862 20:23:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.862 20:23:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.862 [2024-12-12 20:23:32.038566] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
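The xnvme_rpc test starting here exercises the bdev_xnvme RPC surface end to end: start spdk_tgt, create an xnvme bdev, read every creation parameter back through framework_get_config, delete the bdev, and kill the target. A condensed sketch of that flow, where scripts/rpc.py stands in for the suite's rpc_cmd helper and the jq filter is copied from the traced xnvme/common.sh (this pass runs io_uring with conserve_cpu left off):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt & pid=$!
    # the suite waits for the RPC socket with its waitforlisten helper
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
    # -> /dev/nvme0n1; name and io_mechanism are checked the same way
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill "$pid"; wait "$pid"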
00:12:47.862 [2024-12-12 20:23:32.039036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71715 ] 00:12:48.122 [2024-12-12 20:23:32.204362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.123 [2024-12-12 20:23:32.346147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.065 xnvme_bdev 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71715 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71715 ']' 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71715 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.065 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71715 00:12:49.327 killing process with pid 71715 00:12:49.327 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.327 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.327 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71715' 00:12:49.327 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71715 00:12:49.327 20:23:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71715 00:12:51.242 00:12:51.242 real 0m3.045s 00:12:51.242 user 0m3.060s 00:12:51.242 sys 0m0.489s 00:12:51.242 20:23:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.242 ************************************ 00:12:51.242 END TEST xnvme_rpc 00:12:51.242 ************************************ 00:12:51.242 20:23:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.242 20:23:35 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:51.242 20:23:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:51.242 20:23:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.242 20:23:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.242 ************************************ 00:12:51.242 START TEST xnvme_bdevperf 00:12:51.242 ************************************ 00:12:51.242 20:23:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:51.242 20:23:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:51.242 20:23:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:12:51.242 20:23:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:51.242 20:23:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:51.242 20:23:35 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:51.242 20:23:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:51.242 20:23:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:51.242 { 00:12:51.242 "subsystems": [ 00:12:51.242 { 00:12:51.242 "subsystem": "bdev", 00:12:51.242 "config": [ 00:12:51.242 { 00:12:51.242 "params": { 00:12:51.242 "io_mechanism": "io_uring", 00:12:51.242 "conserve_cpu": false, 00:12:51.242 "filename": "/dev/nvme0n1", 00:12:51.242 "name": "xnvme_bdev" 00:12:51.242 }, 00:12:51.242 "method": "bdev_xnvme_create" 00:12:51.242 }, 00:12:51.242 { 00:12:51.242 "method": "bdev_wait_for_examine" 00:12:51.242 } 00:12:51.242 ] 00:12:51.242 } 00:12:51.242 ] 00:12:51.242 } 00:12:51.242 [2024-12-12 20:23:35.129901] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:12:51.242 [2024-12-12 20:23:35.130063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71789 ] 00:12:51.242 [2024-12-12 20:23:35.293561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.242 [2024-12-12 20:23:35.436038] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.816 Running I/O for 5 seconds... 00:12:53.701 31167.00 IOPS, 121.75 MiB/s [2024-12-12T20:23:38.874Z] 31709.50 IOPS, 123.87 MiB/s [2024-12-12T20:23:39.819Z] 31519.33 IOPS, 123.12 MiB/s [2024-12-12T20:23:40.762Z] 31363.75 IOPS, 122.51 MiB/s 00:12:56.534 Latency(us) 00:12:56.534 [2024-12-12T20:23:40.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.534 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:56.534 xnvme_bdev : 5.00 31457.25 122.88 0.00 0.00 2030.18 1335.93 10284.11 00:12:56.534 [2024-12-12T20:23:40.762Z] =================================================================================================================== 00:12:56.534 [2024-12-12T20:23:40.762Z] Total : 31457.25 122.88 0.00 0.00 2030.18 1335.93 10284.11 00:12:57.477 20:23:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:57.477 20:23:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:57.477 20:23:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:57.477 20:23:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:57.477 20:23:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:57.477 { 00:12:57.477 "subsystems": [ 00:12:57.477 { 00:12:57.477 "subsystem": "bdev", 00:12:57.477 "config": [ 00:12:57.477 { 00:12:57.477 "params": { 00:12:57.477 "io_mechanism": "io_uring", 00:12:57.477 "conserve_cpu": false, 00:12:57.477 "filename": "/dev/nvme0n1", 00:12:57.477 "name": "xnvme_bdev" 00:12:57.477 }, 00:12:57.477 "method": "bdev_xnvme_create" 00:12:57.477 }, 00:12:57.477 { 00:12:57.477 "method": "bdev_wait_for_examine" 00:12:57.477 } 00:12:57.477 ] 00:12:57.477 } 00:12:57.477 ] 00:12:57.477 } 00:12:57.477 [2024-12-12 20:23:41.672630] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
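The MiB/s column in these bdevperf tables is just IOPS scaled by the 4 KiB I/O size. Checking the randread total reported above:

    # MiB/s = IOPS * 4096 / 1048576
    echo '31457.25 * 4096 / 1048576' | bc -l    # -> 122.879..., matching the 122.88 in the table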
00:12:57.477 [2024-12-12 20:23:41.672790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71864 ] 00:12:57.738 [2024-12-12 20:23:41.841116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.998 [2024-12-12 20:23:41.981486] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.258 Running I/O for 5 seconds... 00:13:00.140 31579.00 IOPS, 123.36 MiB/s [2024-12-12T20:23:45.315Z] 31391.00 IOPS, 122.62 MiB/s [2024-12-12T20:23:46.701Z] 31388.00 IOPS, 122.61 MiB/s [2024-12-12T20:23:47.644Z] 31415.25 IOPS, 122.72 MiB/s 00:13:03.416 Latency(us) 00:13:03.416 [2024-12-12T20:23:47.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.416 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:03.416 xnvme_bdev : 5.00 31446.23 122.84 0.00 0.00 2030.75 371.79 8822.15 00:13:03.416 [2024-12-12T20:23:47.644Z] =================================================================================================================== 00:13:03.416 [2024-12-12T20:23:47.644Z] Total : 31446.23 122.84 0.00 0.00 2030.75 371.79 8822.15 00:13:03.987 00:13:03.987 real 0m13.067s 00:13:03.987 user 0m6.178s 00:13:03.987 sys 0m6.590s 00:13:03.987 ************************************ 00:13:03.987 END TEST xnvme_bdevperf 00:13:03.987 ************************************ 00:13:03.987 20:23:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.987 20:23:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:03.987 20:23:48 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:03.987 20:23:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:03.987 20:23:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.987 20:23:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.987 ************************************ 00:13:03.987 START TEST xnvme_fio_plugin 00:13:03.987 ************************************ 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:03.987 20:23:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:04.245 { 00:13:04.245 "subsystems": [ 00:13:04.245 { 00:13:04.245 "subsystem": "bdev", 00:13:04.245 "config": [ 00:13:04.245 { 00:13:04.245 "params": { 00:13:04.245 "io_mechanism": "io_uring", 00:13:04.245 "conserve_cpu": false, 00:13:04.245 "filename": "/dev/nvme0n1", 00:13:04.245 "name": "xnvme_bdev" 00:13:04.245 }, 00:13:04.245 "method": "bdev_xnvme_create" 00:13:04.245 }, 00:13:04.245 { 00:13:04.245 "method": "bdev_wait_for_examine" 00:13:04.245 } 00:13:04.245 ] 00:13:04.245 } 00:13:04.245 ] 00:13:04.245 } 00:13:04.245 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:04.245 fio-3.35 00:13:04.245 Starting 1 thread 00:13:10.826 00:13:10.826 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71978: Thu Dec 12 20:23:54 2024 00:13:10.826 read: IOPS=34.8k, BW=136MiB/s (142MB/s)(679MiB/5001msec) 00:13:10.826 slat (usec): min=2, max=142, avg= 3.83, stdev= 2.14 00:13:10.826 clat (usec): min=871, max=5856, avg=1685.78, stdev=294.30 00:13:10.826 lat (usec): min=874, max=5862, avg=1689.60, stdev=294.71 00:13:10.826 clat percentiles (usec): 00:13:10.826 | 1.00th=[ 1106], 5.00th=[ 1237], 10.00th=[ 1319], 20.00th=[ 1434], 00:13:10.826 | 30.00th=[ 1516], 40.00th=[ 1598], 50.00th=[ 1663], 60.00th=[ 1745], 00:13:10.826 | 70.00th=[ 1827], 80.00th=[ 1926], 90.00th=[ 2073], 95.00th=[ 2212], 00:13:10.826 | 99.00th=[ 2474], 99.50th=[ 2573], 99.90th=[ 2769], 99.95th=[ 2868], 00:13:10.826 | 99.99th=[ 3228] 00:13:10.826 bw ( KiB/s): min=133120, max=147456, per=99.80%, 
avg=138808.89, stdev=4459.43, samples=9 00:13:10.826 iops : min=33280, max=36864, avg=34702.22, stdev=1114.86, samples=9 00:13:10.826 lat (usec) : 1000=0.13% 00:13:10.826 lat (msec) : 2=85.71%, 4=14.16%, 10=0.01% 00:13:10.826 cpu : usr=32.58%, sys=66.14%, ctx=64, majf=0, minf=762 00:13:10.826 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:10.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.826 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:10.826 issued rwts: total=173886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.826 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:10.826 00:13:10.826 Run status group 0 (all jobs): 00:13:10.826 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=679MiB (712MB), run=5001-5001msec 00:13:10.826 ----------------------------------------------------- 00:13:10.826 Suppressions used: 00:13:10.826 count bytes template 00:13:10.826 1 11 /usr/src/fio/parse.c 00:13:10.826 1 8 libtcmalloc_minimal.so 00:13:10.826 1 904 libcrypto.so 00:13:10.826 ----------------------------------------------------- 00:13:10.826 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:10.826 20:23:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.826 { 00:13:10.826 "subsystems": [ 00:13:10.826 { 00:13:10.826 "subsystem": "bdev", 00:13:10.826 "config": [ 00:13:10.826 { 00:13:10.826 "params": { 00:13:10.826 "io_mechanism": "io_uring", 00:13:10.826 "conserve_cpu": false, 00:13:10.826 "filename": "/dev/nvme0n1", 00:13:10.826 "name": "xnvme_bdev" 00:13:10.826 }, 00:13:10.826 "method": "bdev_xnvme_create" 00:13:10.826 }, 00:13:10.826 { 00:13:10.826 "method": "bdev_wait_for_examine" 00:13:10.826 } 00:13:10.826 ] 00:13:10.826 } 00:13:10.826 ] 00:13:10.826 } 00:13:11.086 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:11.086 fio-3.35 00:13:11.086 Starting 1 thread 00:13:17.686 00:13:17.686 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72071: Thu Dec 12 20:24:00 2024 00:13:17.686 write: IOPS=33.1k, BW=129MiB/s (136MB/s)(647MiB/5001msec); 0 zone resets 00:13:17.686 slat (usec): min=2, max=505, avg= 3.85, stdev= 2.40 00:13:17.686 clat (usec): min=512, max=124570, avg=1777.83, stdev=2385.94 00:13:17.686 lat (usec): min=516, max=124573, avg=1781.67, stdev=2386.00 00:13:17.686 clat percentiles (usec): 00:13:17.686 | 1.00th=[ 1106], 5.00th=[ 1270], 10.00th=[ 1352], 20.00th=[ 1483], 00:13:17.686 | 30.00th=[ 1565], 40.00th=[ 1631], 50.00th=[ 1713], 60.00th=[ 1778], 00:13:17.686 | 70.00th=[ 1860], 80.00th=[ 1958], 90.00th=[ 2114], 95.00th=[ 2278], 00:13:17.686 | 99.00th=[ 2606], 99.50th=[ 2769], 99.90th=[ 3654], 99.95th=[ 7046], 00:13:17.686 | 99.99th=[124257] 00:13:17.686 bw ( KiB/s): min=108008, max=146944, per=99.36%, avg=131537.78, stdev=13847.17, samples=9 00:13:17.686 iops : min=27002, max=36736, avg=32884.44, stdev=3461.79, samples=9 00:13:17.686 lat (usec) : 750=0.01%, 1000=0.16% 00:13:17.686 lat (msec) : 2=83.14%, 4=16.59%, 10=0.05%, 250=0.04% 00:13:17.686 cpu : usr=33.08%, sys=65.64%, ctx=20, majf=0, minf=763 00:13:17.686 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:13:17.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.686 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:17.686 issued rwts: total=0,165510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.686 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.686 00:13:17.686 Run status group 0 (all jobs): 00:13:17.686 WRITE: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=647MiB (678MB), run=5001-5001msec 00:13:17.686 ----------------------------------------------------- 00:13:17.686 Suppressions used: 00:13:17.686 count bytes template 00:13:17.686 1 11 /usr/src/fio/parse.c 00:13:17.686 1 8 libtcmalloc_minimal.so 00:13:17.686 1 904 libcrypto.so 00:13:17.686 ----------------------------------------------------- 00:13:17.686 00:13:17.686 ************************************ 00:13:17.686 END TEST xnvme_fio_plugin 00:13:17.686 ************************************ 
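Every test in this log executes inside the run_test wrapper, which is what produces the asterisk banners and the nearby real/user/sys timing. Its rough shape, inferred only from these traces (the actual helper lives in test/common/autotest_common.sh and additionally validates its arguments and toggles xtrace):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                 # emits the real/user/sys lines seen here
        echo "END TEST $name"
    }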
00:13:17.686 00:13:17.686 real 0m13.553s 00:13:17.686 user 0m6.053s 00:13:17.686 sys 0m7.065s 00:13:17.686 20:24:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.686 20:24:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:17.686 20:24:01 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:17.686 20:24:01 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:17.686 20:24:01 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:17.686 20:24:01 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:17.686 20:24:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:17.686 20:24:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.686 20:24:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.686 ************************************ 00:13:17.686 START TEST xnvme_rpc 00:13:17.686 ************************************ 00:13:17.686 20:24:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:17.686 20:24:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:17.686 20:24:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:17.686 20:24:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:17.686 20:24:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:17.686 20:24:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72156 00:13:17.686 20:24:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72156 00:13:17.686 20:24:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72156 ']' 00:13:17.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.686 20:24:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.686 20:24:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.686 20:24:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.686 20:24:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.686 20:24:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.686 20:24:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:17.686 [2024-12-12 20:24:01.876739] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
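This pass repeats xnvme_rpc with conserve_cpu enabled: the driver loop sets method_bdev_xnvme_create_0["conserve_cpu"]=true, which becomes the -c flag on the create call. The check it performs, as traced below (scripts/rpc.py again standing in for rpc_cmd):

    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # prints true; the test asserts it with [[ true == \t\r\u\e ]]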
00:13:17.686 [2024-12-12 20:24:01.876867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72156 ] 00:13:17.946 [2024-12-12 20:24:02.037527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.946 [2024-12-12 20:24:02.140403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.885 xnvme_bdev 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72156 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72156 ']' 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72156 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72156 00:13:18.885 killing process with pid 72156 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72156' 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72156 00:13:18.885 20:24:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72156 00:13:20.791 ************************************ 00:13:20.791 END TEST xnvme_rpc 00:13:20.791 ************************************ 00:13:20.791 00:13:20.791 real 0m2.778s 00:13:20.791 user 0m2.863s 00:13:20.791 sys 0m0.377s 00:13:20.791 20:24:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.791 20:24:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.791 20:24:04 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:20.791 20:24:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:20.791 20:24:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.791 20:24:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.791 ************************************ 00:13:20.791 START TEST xnvme_bdevperf 00:13:20.791 ************************************ 00:13:20.791 20:24:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:20.791 20:24:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:20.791 20:24:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:20.791 20:24:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:20.791 20:24:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:20.791 20:24:04 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:20.791 20:24:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:20.791 20:24:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:20.791 { 00:13:20.791 "subsystems": [ 00:13:20.791 { 00:13:20.791 "subsystem": "bdev", 00:13:20.791 "config": [ 00:13:20.791 { 00:13:20.791 "params": { 00:13:20.791 "io_mechanism": "io_uring", 00:13:20.791 "conserve_cpu": true, 00:13:20.791 "filename": "/dev/nvme0n1", 00:13:20.791 "name": "xnvme_bdev" 00:13:20.791 }, 00:13:20.791 "method": "bdev_xnvme_create" 00:13:20.791 }, 00:13:20.791 { 00:13:20.791 "method": "bdev_wait_for_examine" 00:13:20.791 } 00:13:20.791 ] 00:13:20.791 } 00:13:20.791 ] 00:13:20.791 } 00:13:20.791 [2024-12-12 20:24:04.703241] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:13:20.791 [2024-12-12 20:24:04.703356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72224 ] 00:13:20.791 [2024-12-12 20:24:04.864927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.791 [2024-12-12 20:24:04.964357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.052 Running I/O for 5 seconds... 00:13:23.373 35475.00 IOPS, 138.57 MiB/s [2024-12-12T20:24:08.540Z] 35884.50 IOPS, 140.17 MiB/s [2024-12-12T20:24:09.482Z] 35495.67 IOPS, 138.65 MiB/s [2024-12-12T20:24:10.425Z] 35228.25 IOPS, 137.61 MiB/s 00:13:26.197 Latency(us) 00:13:26.197 [2024-12-12T20:24:10.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.197 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:26.197 xnvme_bdev : 5.00 34234.09 133.73 0.00 0.00 1865.21 819.20 7965.14 00:13:26.197 [2024-12-12T20:24:10.425Z] =================================================================================================================== 00:13:26.197 [2024-12-12T20:24:10.425Z] Total : 34234.09 133.73 0.00 0.00 1865.21 819.20 7965.14 00:13:27.141 20:24:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:27.141 20:24:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:27.141 20:24:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:27.141 20:24:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:27.141 20:24:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:27.141 { 00:13:27.141 "subsystems": [ 00:13:27.141 { 00:13:27.141 "subsystem": "bdev", 00:13:27.141 "config": [ 00:13:27.141 { 00:13:27.141 "params": { 00:13:27.141 "io_mechanism": "io_uring", 00:13:27.141 "conserve_cpu": true, 00:13:27.141 "filename": "/dev/nvme0n1", 00:13:27.141 "name": "xnvme_bdev" 00:13:27.141 }, 00:13:27.141 "method": "bdev_xnvme_create" 00:13:27.141 }, 00:13:27.141 { 00:13:27.141 "method": "bdev_wait_for_examine" 00:13:27.141 } 00:13:27.141 ] 00:13:27.141 } 00:13:27.141 ] 00:13:27.141 } 00:13:27.141 [2024-12-12 20:24:11.104404] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
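Each target-based test above tears down through the killprocess helper (pids 71273, 71715, and 72156 in this log). Its approximate shape, reconstructed from those traces and simplified (the real helper also special-cases sudo-wrapped processes):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] && kill -0 "$pid" || return 1    # pid given and alive?
        ps --no-headers -o comm= "$pid"                # traced: resolves to reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap the target
    }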
00:13:27.141 [2024-12-12 20:24:11.104782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72300 ] 00:13:27.141 [2024-12-12 20:24:11.270521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.402 [2024-12-12 20:24:11.406444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.660 Running I/O for 5 seconds... 00:13:29.545 32166.00 IOPS, 125.65 MiB/s [2024-12-12T20:24:14.717Z] 32078.00 IOPS, 125.30 MiB/s [2024-12-12T20:24:16.102Z] 32386.00 IOPS, 126.51 MiB/s [2024-12-12T20:24:17.048Z] 32222.75 IOPS, 125.87 MiB/s 00:13:32.820 Latency(us) 00:13:32.820 [2024-12-12T20:24:17.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.820 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:32.820 xnvme_bdev : 5.00 32377.40 126.47 0.00 0.00 1972.13 184.32 11090.71 00:13:32.820 [2024-12-12T20:24:17.048Z] =================================================================================================================== 00:13:32.820 [2024-12-12T20:24:17.048Z] Total : 32377.40 126.47 0.00 0.00 1972.13 184.32 11090.71 00:13:33.389 ************************************ 00:13:33.389 END TEST xnvme_bdevperf 00:13:33.389 ************************************ 00:13:33.389 00:13:33.389 real 0m12.918s 00:13:33.389 user 0m8.365s 00:13:33.390 sys 0m3.995s 00:13:33.390 20:24:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.390 20:24:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:33.390 20:24:17 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:33.390 20:24:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:33.390 20:24:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.390 20:24:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.650 ************************************ 00:13:33.650 START TEST xnvme_fio_plugin 00:13:33.650 ************************************ 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:33.650 20:24:17 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:33.650 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:33.651 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:33.651 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:33.651 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:33.651 20:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:33.651 { 00:13:33.651 "subsystems": [ 00:13:33.651 { 00:13:33.651 "subsystem": "bdev", 00:13:33.651 "config": [ 00:13:33.651 { 00:13:33.651 "params": { 00:13:33.651 "io_mechanism": "io_uring", 00:13:33.651 "conserve_cpu": true, 00:13:33.651 "filename": "/dev/nvme0n1", 00:13:33.651 "name": "xnvme_bdev" 00:13:33.651 }, 00:13:33.651 "method": "bdev_xnvme_create" 00:13:33.651 }, 00:13:33.651 { 00:13:33.651 "method": "bdev_wait_for_examine" 00:13:33.651 } 00:13:33.651 ] 00:13:33.651 } 00:13:33.651 ] 00:13:33.651 } 00:13:33.651 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:33.651 fio-3.35 00:13:33.651 Starting 1 thread 00:13:40.238 00:13:40.238 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72419: Thu Dec 12 20:24:23 2024 00:13:40.238 read: IOPS=34.2k, BW=134MiB/s (140MB/s)(668MiB/5002msec) 00:13:40.238 slat (usec): min=2, max=124, avg= 4.03, stdev= 2.36 00:13:40.238 clat (usec): min=898, max=6633, avg=1709.22, stdev=327.29 00:13:40.238 lat (usec): min=902, max=6636, avg=1713.25, stdev=327.95 00:13:40.238 clat percentiles (usec): 00:13:40.238 | 1.00th=[ 1139], 5.00th=[ 1254], 10.00th=[ 1336], 20.00th=[ 1434], 00:13:40.238 | 30.00th=[ 1516], 40.00th=[ 1598], 50.00th=[ 1680], 60.00th=[ 1745], 00:13:40.238 | 70.00th=[ 1844], 80.00th=[ 1958], 90.00th=[ 2114], 95.00th=[ 2278], 00:13:40.238 | 99.00th=[ 2606], 99.50th=[ 2769], 99.90th=[ 3097], 99.95th=[ 3261], 00:13:40.238 | 99.99th=[ 6456] 00:13:40.238 bw ( KiB/s): min=121344, max=151040, per=100.00%, avg=138467.56, 
stdev=10446.28, samples=9 00:13:40.238 iops : min=30336, max=37760, avg=34616.89, stdev=2611.57, samples=9 00:13:40.238 lat (usec) : 1000=0.08% 00:13:40.238 lat (msec) : 2=83.08%, 4=16.80%, 10=0.04% 00:13:40.238 cpu : usr=54.77%, sys=41.51%, ctx=60, majf=0, minf=762 00:13:40.238 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:40.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.238 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:40.238 issued rwts: total=171006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.238 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:40.238 00:13:40.238 Run status group 0 (all jobs): 00:13:40.238 READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=668MiB (700MB), run=5002-5002msec 00:13:40.499 ----------------------------------------------------- 00:13:40.499 Suppressions used: 00:13:40.499 count bytes template 00:13:40.499 1 11 /usr/src/fio/parse.c 00:13:40.499 1 8 libtcmalloc_minimal.so 00:13:40.499 1 904 libcrypto.so 00:13:40.499 ----------------------------------------------------- 00:13:40.499 00:13:40.499 20:24:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:40.499 20:24:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.499 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.499 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:40.500 20:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.500 { 00:13:40.500 "subsystems": [ 00:13:40.500 { 00:13:40.500 "subsystem": "bdev", 00:13:40.500 "config": [ 00:13:40.500 { 00:13:40.500 "params": { 00:13:40.500 "io_mechanism": "io_uring", 00:13:40.500 "conserve_cpu": true, 00:13:40.500 "filename": "/dev/nvme0n1", 00:13:40.500 "name": "xnvme_bdev" 00:13:40.500 }, 00:13:40.500 "method": "bdev_xnvme_create" 00:13:40.500 }, 00:13:40.500 { 00:13:40.500 "method": "bdev_wait_for_examine" 00:13:40.500 } 00:13:40.500 ] 00:13:40.500 } 00:13:40.500 ] 00:13:40.500 } 00:13:40.761 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:40.761 fio-3.35 00:13:40.761 Starting 1 thread 00:13:47.448 00:13:47.448 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72511: Thu Dec 12 20:24:30 2024 00:13:47.448 write: IOPS=31.8k, BW=124MiB/s (130MB/s)(621MiB/5001msec); 0 zone resets 00:13:47.448 slat (usec): min=2, max=158, avg= 4.06, stdev= 2.43 00:13:47.448 clat (usec): min=473, max=94069, avg=1850.22, stdev=1783.82 00:13:47.448 lat (usec): min=477, max=94073, avg=1854.28, stdev=1783.88 00:13:47.448 clat percentiles (usec): 00:13:47.448 | 1.00th=[ 1287], 5.00th=[ 1418], 10.00th=[ 1500], 20.00th=[ 1582], 00:13:47.448 | 30.00th=[ 1647], 40.00th=[ 1713], 50.00th=[ 1778], 60.00th=[ 1844], 00:13:47.448 | 70.00th=[ 1926], 80.00th=[ 2024], 90.00th=[ 2180], 95.00th=[ 2343], 00:13:47.448 | 99.00th=[ 2671], 99.50th=[ 2868], 99.90th=[ 4047], 99.95th=[ 7308], 00:13:47.448 | 99.99th=[91751] 00:13:47.448 bw ( KiB/s): min=101868, max=136120, per=99.06%, avg=125871.56, stdev=9951.79, samples=9 00:13:47.448 iops : min=25467, max=34030, avg=31467.89, stdev=2487.95, samples=9 00:13:47.448 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.03% 00:13:47.448 lat (msec) : 2=77.98%, 4=21.86%, 10=0.06%, 20=0.01%, 100=0.04% 00:13:47.448 cpu : usr=57.76%, sys=37.90%, ctx=16, majf=0, minf=763 00:13:47.448 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:13:47.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.448 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:47.448 issued rwts: total=0,158871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.448 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:47.448 00:13:47.448 Run status group 0 (all jobs): 00:13:47.448 WRITE: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=621MiB (651MB), run=5001-5001msec 00:13:47.448 ----------------------------------------------------- 00:13:47.448 Suppressions used: 00:13:47.448 count bytes template 00:13:47.448 1 11 /usr/src/fio/parse.c 00:13:47.448 1 8 libtcmalloc_minimal.so 00:13:47.448 1 904 libcrypto.so 00:13:47.448 ----------------------------------------------------- 00:13:47.448 00:13:47.448 00:13:47.448 real 0m13.796s 00:13:47.448 user 0m8.507s 00:13:47.448 sys 0m4.558s 00:13:47.448 ************************************ 00:13:47.448 END TEST 
xnvme_fio_plugin 00:13:47.448 ************************************ 00:13:47.448 20:24:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.448 20:24:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:47.448 20:24:31 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:47.448 20:24:31 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:47.448 20:24:31 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:47.448 20:24:31 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:47.448 20:24:31 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:47.448 20:24:31 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:47.448 20:24:31 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:47.448 20:24:31 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:47.448 20:24:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:47.448 20:24:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:47.448 20:24:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.448 20:24:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:47.448 ************************************ 00:13:47.448 START TEST xnvme_rpc 00:13:47.448 ************************************ 00:13:47.448 20:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:47.448 20:24:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:47.448 20:24:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:47.448 20:24:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:47.448 20:24:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:47.448 20:24:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72597 00:13:47.448 20:24:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72597 00:13:47.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.448 20:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72597 ']' 00:13:47.448 20:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.448 20:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.448 20:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.448 20:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.448 20:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.448 20:24:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:47.448 [2024-12-12 20:24:31.568156] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
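[annotation] The xnvme_rpc test starting here is a create/inspect/delete round-trip against a freshly launched spdk_tgt. Stripped of the xtrace plumbing it amounts to the following sketch (commands as they appear in the rpc_cmd traces below; using scripts/rpc.py as the wrapper and the simple kill at the end are assumptions about the harness internals):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  # the harness blocks here until /var/tmp/spdk.sock accepts connections
  scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
  scripts/rpc.py framework_get_config bdev |
      jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'   # -> /dev/ng0n1
  scripts/rpc.py bdev_xnvme_delete xnvme_bdev
  kill %1

Each rpc_xnvme check below (name, filename, io_mechanism, conserve_cpu) is the same framework_get_config bdev call with a different jq field.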
00:13:47.448 [2024-12-12 20:24:31.568278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72597 ] 00:13:47.709 [2024-12-12 20:24:31.732646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.710 [2024-12-12 20:24:31.861471] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.644 xnvme_bdev 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72597 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72597 ']' 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72597 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72597 00:13:48.644 killing process with pid 72597 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72597' 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72597 00:13:48.644 20:24:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72597 00:13:50.545 00:13:50.545 real 0m2.769s 00:13:50.545 user 0m2.889s 00:13:50.545 sys 0m0.416s 00:13:50.545 20:24:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.545 20:24:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.545 ************************************ 00:13:50.545 END TEST xnvme_rpc 00:13:50.545 ************************************ 00:13:50.545 20:24:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:50.545 20:24:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:50.545 20:24:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.545 20:24:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.545 ************************************ 00:13:50.545 START TEST xnvme_bdevperf 00:13:50.545 ************************************ 00:13:50.545 20:24:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:50.545 20:24:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:50.545 20:24:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:13:50.545 20:24:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:50.545 20:24:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:50.545 20:24:34 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:50.545 20:24:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:50.545 20:24:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:50.545 { 00:13:50.545 "subsystems": [ 00:13:50.545 { 00:13:50.545 "subsystem": "bdev", 00:13:50.545 "config": [ 00:13:50.545 { 00:13:50.545 "params": { 00:13:50.545 "io_mechanism": "io_uring_cmd", 00:13:50.545 "conserve_cpu": false, 00:13:50.545 "filename": "/dev/ng0n1", 00:13:50.545 "name": "xnvme_bdev" 00:13:50.545 }, 00:13:50.545 "method": "bdev_xnvme_create" 00:13:50.545 }, 00:13:50.545 { 00:13:50.545 "method": "bdev_wait_for_examine" 00:13:50.545 } 00:13:50.545 ] 00:13:50.545 } 00:13:50.545 ] 00:13:50.545 } 00:13:50.545 [2024-12-12 20:24:34.379548] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:13:50.545 [2024-12-12 20:24:34.379659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72660 ] 00:13:50.545 [2024-12-12 20:24:34.538053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.545 [2024-12-12 20:24:34.638815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.804 Running I/O for 5 seconds... 00:13:52.677 37568.00 IOPS, 146.75 MiB/s [2024-12-12T20:24:38.290Z] 37562.00 IOPS, 146.73 MiB/s [2024-12-12T20:24:39.231Z] 36109.33 IOPS, 141.05 MiB/s [2024-12-12T20:24:40.172Z] 35144.75 IOPS, 137.28 MiB/s [2024-12-12T20:24:40.172Z] 34617.00 IOPS, 135.22 MiB/s 00:13:55.944 Latency(us) 00:13:55.944 [2024-12-12T20:24:40.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.944 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:55.944 xnvme_bdev : 5.01 34592.21 135.13 0.00 0.00 1846.06 677.42 5268.09 00:13:55.944 [2024-12-12T20:24:40.172Z] =================================================================================================================== 00:13:55.944 [2024-12-12T20:24:40.172Z] Total : 34592.21 135.13 0.00 0.00 1846.06 677.42 5268.09 00:13:56.514 20:24:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:56.515 20:24:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:56.515 20:24:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:56.515 20:24:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:56.515 20:24:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:56.777 { 00:13:56.777 "subsystems": [ 00:13:56.777 { 00:13:56.777 "subsystem": "bdev", 00:13:56.777 "config": [ 00:13:56.777 { 00:13:56.777 "params": { 00:13:56.777 "io_mechanism": "io_uring_cmd", 00:13:56.777 "conserve_cpu": false, 00:13:56.777 "filename": "/dev/ng0n1", 00:13:56.777 "name": "xnvme_bdev" 00:13:56.777 }, 00:13:56.777 "method": "bdev_xnvme_create" 00:13:56.777 }, 00:13:56.777 { 00:13:56.777 "method": "bdev_wait_for_examine" 00:13:56.777 } 00:13:56.777 ] 00:13:56.777 } 00:13:56.777 ] 00:13:56.777 } 00:13:56.777 [2024-12-12 20:24:40.788754] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
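[annotation] This pass swaps the io mechanism, not the benchmark: io_uring_cmd submits NVMe pass-through commands against the generic char device /dev/ng0n1, where the earlier io_uring pass drove the block node /dev/nvme0n1. A self-contained way to read those distinguishing params back out of any gen_conf blob (jq filter adapted from the test's rpc_xnvme helper; the JSON is the one printed above):

  jq -r '.subsystems[].config[]
         | select(.method == "bdev_xnvme_create").params
         | "\(.io_mechanism) \(.filename) conserve_cpu=\(.conserve_cpu)"' <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "params": { "io_mechanism": "io_uring_cmd", "conserve_cpu": false,
                  "filename": "/dev/ng0n1", "name": "xnvme_bdev" },
      "method": "bdev_xnvme_create" },
    { "method": "bdev_wait_for_examine" } ] } ] }
EOF
  # prints: io_uring_cmd /dev/ng0n1 conserve_cpu=false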
00:13:56.777 [2024-12-12 20:24:40.789087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72740 ] 00:13:56.777 [2024-12-12 20:24:40.946365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.039 [2024-12-12 20:24:41.088677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.300 Running I/O for 5 seconds... 00:13:59.617 48255.00 IOPS, 188.50 MiB/s [2024-12-12T20:24:44.411Z] 55039.50 IOPS, 215.00 MiB/s [2024-12-12T20:24:45.785Z] 56753.00 IOPS, 221.69 MiB/s [2024-12-12T20:24:46.720Z] 57916.75 IOPS, 226.24 MiB/s 00:14:02.492 Latency(us) 00:14:02.492 [2024-12-12T20:24:46.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.492 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:02.492 xnvme_bdev : 5.00 58787.26 229.64 0.00 0.00 1084.23 538.78 4537.11 00:14:02.492 [2024-12-12T20:24:46.720Z] =================================================================================================================== 00:14:02.492 [2024-12-12T20:24:46.720Z] Total : 58787.26 229.64 0.00 0.00 1084.23 538.78 4537.11 00:14:03.059 20:24:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:03.059 20:24:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:03.059 20:24:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:03.059 20:24:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:03.059 20:24:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:03.059 { 00:14:03.059 "subsystems": [ 00:14:03.059 { 00:14:03.059 "subsystem": "bdev", 00:14:03.059 "config": [ 00:14:03.059 { 00:14:03.059 "params": { 00:14:03.059 "io_mechanism": "io_uring_cmd", 00:14:03.059 "conserve_cpu": false, 00:14:03.059 "filename": "/dev/ng0n1", 00:14:03.059 "name": "xnvme_bdev" 00:14:03.059 }, 00:14:03.059 "method": "bdev_xnvme_create" 00:14:03.059 }, 00:14:03.059 { 00:14:03.059 "method": "bdev_wait_for_examine" 00:14:03.059 } 00:14:03.059 ] 00:14:03.059 } 00:14:03.059 ] 00:14:03.059 } 00:14:03.059 [2024-12-12 20:24:47.213037] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:14:03.059 [2024-12-12 20:24:47.213198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72814 ] 00:14:03.318 [2024-12-12 20:24:47.381323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.318 [2024-12-12 20:24:47.479620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.578 Running I/O for 5 seconds... 
00:14:05.889 97152.00 IOPS, 379.50 MiB/s [2024-12-12T20:24:50.765Z] 95840.00 IOPS, 374.38 MiB/s [2024-12-12T20:24:52.138Z] 95466.67 IOPS, 372.92 MiB/s [2024-12-12T20:24:53.070Z] 95776.00 IOPS, 374.12 MiB/s 00:14:08.843 Latency(us) 00:14:08.843 [2024-12-12T20:24:53.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.843 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:08.843 xnvme_bdev : 5.00 95477.73 372.96 0.00 0.00 666.93 444.26 2306.36 00:14:08.843 [2024-12-12T20:24:53.071Z] =================================================================================================================== 00:14:08.843 [2024-12-12T20:24:53.071Z] Total : 95477.73 372.96 0.00 0.00 666.93 444.26 2306.36 00:14:09.409 20:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:09.409 20:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:09.409 20:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:09.409 20:24:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:09.409 20:24:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:09.409 { 00:14:09.409 "subsystems": [ 00:14:09.409 { 00:14:09.409 "subsystem": "bdev", 00:14:09.409 "config": [ 00:14:09.409 { 00:14:09.409 "params": { 00:14:09.409 "io_mechanism": "io_uring_cmd", 00:14:09.409 "conserve_cpu": false, 00:14:09.409 "filename": "/dev/ng0n1", 00:14:09.409 "name": "xnvme_bdev" 00:14:09.409 }, 00:14:09.409 "method": "bdev_xnvme_create" 00:14:09.409 }, 00:14:09.409 { 00:14:09.409 "method": "bdev_wait_for_examine" 00:14:09.409 } 00:14:09.409 ] 00:14:09.409 } 00:14:09.409 ] 00:14:09.409 } 00:14:09.409 [2024-12-12 20:24:53.496429] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:14:09.409 [2024-12-12 20:24:53.496543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72889 ] 00:14:09.667 [2024-12-12 20:24:53.657008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.667 [2024-12-12 20:24:53.757108] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.926 Running I/O for 5 seconds... 
00:14:11.805 474.00 IOPS, 1.85 MiB/s [2024-12-12T20:24:57.418Z] 581.50 IOPS, 2.27 MiB/s [2024-12-12T20:24:58.357Z] 669.67 IOPS, 2.62 MiB/s [2024-12-12T20:24:59.297Z] 712.75 IOPS, 2.78 MiB/s [2024-12-12T20:24:59.297Z] 718.80 IOPS, 2.81 MiB/s 00:14:15.069 Latency(us) 00:14:15.069 [2024-12-12T20:24:59.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.069 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:15.069 xnvme_bdev : 5.07 721.89 2.82 0.00 0.00 88203.90 103.19 271016.57 00:14:15.069 [2024-12-12T20:24:59.297Z] =================================================================================================================== 00:14:15.069 [2024-12-12T20:24:59.297Z] Total : 721.89 2.82 0.00 0.00 88203.90 103.19 271016.57 00:14:15.639 00:14:15.639 real 0m25.527s 00:14:15.639 user 0m14.343s 00:14:15.639 sys 0m10.761s 00:14:15.639 20:24:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.639 20:24:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:15.639 ************************************ 00:14:15.639 END TEST xnvme_bdevperf 00:14:15.639 ************************************ 00:14:15.900 20:24:59 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:15.901 20:24:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:15.901 20:24:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.901 20:24:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:15.901 ************************************ 00:14:15.901 START TEST xnvme_fio_plugin 00:14:15.901 ************************************ 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for 
sanitizer in "${sanitizers[@]}" 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:15.901 20:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.901 { 00:14:15.901 "subsystems": [ 00:14:15.901 { 00:14:15.901 "subsystem": "bdev", 00:14:15.901 "config": [ 00:14:15.901 { 00:14:15.901 "params": { 00:14:15.901 "io_mechanism": "io_uring_cmd", 00:14:15.901 "conserve_cpu": false, 00:14:15.901 "filename": "/dev/ng0n1", 00:14:15.901 "name": "xnvme_bdev" 00:14:15.901 }, 00:14:15.901 "method": "bdev_xnvme_create" 00:14:15.901 }, 00:14:15.901 { 00:14:15.901 "method": "bdev_wait_for_examine" 00:14:15.901 } 00:14:15.901 ] 00:14:15.901 } 00:14:15.901 ] 00:14:15.901 } 00:14:15.901 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:15.901 fio-3.35 00:14:15.901 Starting 1 thread 00:14:22.482 00:14:22.483 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73002: Thu Dec 12 20:25:05 2024 00:14:22.483 read: IOPS=59.2k, BW=231MiB/s (243MB/s)(1157MiB/5001msec) 00:14:22.483 slat (usec): min=2, max=413, avg= 3.67, stdev= 1.80 00:14:22.483 clat (usec): min=538, max=3268, avg=938.09, stdev=250.91 00:14:22.483 lat (usec): min=540, max=3328, avg=941.77, stdev=251.30 00:14:22.483 clat percentiles (usec): 00:14:22.483 | 1.00th=[ 644], 5.00th=[ 676], 10.00th=[ 709], 20.00th=[ 750], 00:14:22.483 | 30.00th=[ 791], 40.00th=[ 832], 50.00th=[ 873], 60.00th=[ 914], 00:14:22.483 | 70.00th=[ 988], 80.00th=[ 1074], 90.00th=[ 1254], 95.00th=[ 1467], 00:14:22.483 | 99.00th=[ 1844], 99.50th=[ 1975], 99.90th=[ 2343], 99.95th=[ 2606], 00:14:22.483 | 99.99th=[ 3130] 00:14:22.483 bw ( KiB/s): min=178688, max=261632, per=100.00%, avg=237517.33, stdev=25735.15, samples=9 00:14:22.483 iops : min=44672, max=65408, avg=59379.33, stdev=6433.79, samples=9 00:14:22.483 lat (usec) : 750=19.82%, 1000=51.74% 00:14:22.483 lat (msec) : 2=27.99%, 4=0.45% 00:14:22.483 cpu : usr=40.80%, sys=58.28%, ctx=41, majf=0, minf=762 00:14:22.483 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:22.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.483 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:22.483 issued 
rwts: total=296256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.483 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:22.483 00:14:22.483 Run status group 0 (all jobs): 00:14:22.483 READ: bw=231MiB/s (243MB/s), 231MiB/s-231MiB/s (243MB/s-243MB/s), io=1157MiB (1213MB), run=5001-5001msec 00:14:22.483 ----------------------------------------------------- 00:14:22.483 Suppressions used: 00:14:22.483 count bytes template 00:14:22.483 1 11 /usr/src/fio/parse.c 00:14:22.483 1 8 libtcmalloc_minimal.so 00:14:22.483 1 904 libcrypto.so 00:14:22.483 ----------------------------------------------------- 00:14:22.483 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:22.483 20:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:14:22.483 { 00:14:22.483 "subsystems": [ 00:14:22.483 { 00:14:22.483 "subsystem": "bdev", 00:14:22.483 "config": [ 00:14:22.483 { 00:14:22.483 "params": { 00:14:22.483 "io_mechanism": "io_uring_cmd", 00:14:22.483 "conserve_cpu": false, 00:14:22.483 "filename": "/dev/ng0n1", 00:14:22.483 "name": "xnvme_bdev" 00:14:22.483 }, 00:14:22.483 "method": "bdev_xnvme_create" 00:14:22.483 }, 00:14:22.483 { 00:14:22.483 "method": "bdev_wait_for_examine" 00:14:22.483 } 00:14:22.483 ] 00:14:22.483 } 00:14:22.483 ] 00:14:22.483 } 00:14:22.744 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:22.744 fio-3.35 00:14:22.744 Starting 1 thread 00:14:29.333 00:14:29.333 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73094: Thu Dec 12 20:25:12 2024 00:14:29.333 write: IOPS=42.3k, BW=165MiB/s (173MB/s)(827MiB/5004msec); 0 zone resets 00:14:29.333 slat (usec): min=2, max=505, avg= 4.01, stdev= 2.34 00:14:29.333 clat (usec): min=50, max=14986, avg=1376.65, stdev=1140.87 00:14:29.333 lat (usec): min=55, max=15016, avg=1380.66, stdev=1141.04 00:14:29.333 clat percentiles (usec): 00:14:29.333 | 1.00th=[ 289], 5.00th=[ 586], 10.00th=[ 717], 20.00th=[ 832], 00:14:29.333 | 30.00th=[ 914], 40.00th=[ 996], 50.00th=[ 1106], 60.00th=[ 1254], 00:14:29.333 | 70.00th=[ 1418], 80.00th=[ 1598], 90.00th=[ 1942], 95.00th=[ 2868], 00:14:29.333 | 99.00th=[ 7504], 99.50th=[ 8848], 99.90th=[10945], 99.95th=[11600], 00:14:29.333 | 99.99th=[13173] 00:14:29.333 bw ( KiB/s): min=138528, max=213312, per=100.00%, avg=170736.89, stdev=28411.13, samples=9 00:14:29.333 iops : min=34632, max=53328, avg=42684.22, stdev=7102.78, samples=9 00:14:29.333 lat (usec) : 100=0.08%, 250=0.63%, 500=2.73%, 750=9.16%, 1000=27.53% 00:14:29.333 lat (msec) : 2=50.77%, 4=5.98%, 10=2.90%, 20=0.22% 00:14:29.333 cpu : usr=36.66%, sys=62.24%, ctx=14, majf=0, minf=763 00:14:29.333 IO depths : 1=1.1%, 2=2.2%, 4=4.5%, 8=9.5%, 16=21.6%, 32=58.5%, >=64=2.7% 00:14:29.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.333 complete : 0=0.0%, 4=97.9%, 8=0.2%, 16=0.2%, 32=0.3%, 64=1.4%, >=64=0.0% 00:14:29.333 issued rwts: total=0,211686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.333 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:29.333 00:14:29.333 Run status group 0 (all jobs): 00:14:29.333 WRITE: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=827MiB (867MB), run=5004-5004msec 00:14:29.333 ----------------------------------------------------- 00:14:29.333 Suppressions used: 00:14:29.333 count bytes template 00:14:29.333 1 11 /usr/src/fio/parse.c 00:14:29.333 1 8 libtcmalloc_minimal.so 00:14:29.333 1 904 libcrypto.so 00:14:29.333 ----------------------------------------------------- 00:14:29.333 00:14:29.333 00:14:29.333 real 0m13.641s 00:14:29.333 user 0m6.608s 00:14:29.333 sys 0m6.600s 00:14:29.333 20:25:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.333 ************************************ 00:14:29.333 END TEST xnvme_fio_plugin 00:14:29.333 ************************************ 00:14:29.333 20:25:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:29.592 20:25:13 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:29.592 20:25:13 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:29.592 20:25:13 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 
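[annotation] The fio plugin test that just ended is a plain fio run against SPDK's spdk_bdev ioengine; the only unusual parts are the LD_PRELOAD order (ASan first in this ASan build, then the plugin .so) and the JSON config on fd 62. A stand-alone sketch of the invocation (everything copied from the LD_PRELOAD and fio traces above; conf_json is a hypothetical file holding the gen_conf JSON):

  conf_json=/tmp/xnvme_conf.json   # hypothetical path; fill with the JSON shown above
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev \
      62<"$conf_json"

Note that --filename names the bdev from the JSON, not a device node, and the run stays in --thread=1 mode, exactly as the plugin traces show.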
00:14:29.592 20:25:13 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:29.592 20:25:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:29.592 20:25:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.592 20:25:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:29.592 ************************************ 00:14:29.592 START TEST xnvme_rpc 00:14:29.592 ************************************ 00:14:29.592 20:25:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:29.592 20:25:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:29.592 20:25:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:29.592 20:25:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:29.592 20:25:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:29.592 20:25:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73179 00:14:29.592 20:25:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73179 00:14:29.592 20:25:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73179 ']' 00:14:29.592 20:25:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.592 20:25:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.592 20:25:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.592 20:25:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.592 20:25:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.592 20:25:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.592 [2024-12-12 20:25:13.675199] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
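[annotation] This second xnvme_rpc pass repeats the io_uring_cmd round-trip with conserve_cpu switched on: the harness appends its -c flag (cc["true"]=-c in the traces above) to the create call, and the verification step then expects true instead of false. In the harness's own rpc_cmd terms (all three commands appear verbatim in the traces that follow):

  rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
  rpc_cmd framework_get_config bdev |
      jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expects: true
  rpc_cmd bdev_xnvme_delete xnvme_bdev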
00:14:29.592 [2024-12-12 20:25:13.675877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73179 ] 00:14:29.850 [2024-12-12 20:25:13.837792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.850 [2024-12-12 20:25:13.946579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.422 xnvme_bdev 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.422 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73179 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73179 ']' 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73179 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73179 00:14:30.684 killing process with pid 73179 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73179' 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73179 00:14:30.684 20:25:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73179 00:14:32.598 ************************************ 00:14:32.598 END TEST xnvme_rpc 00:14:32.598 ************************************ 00:14:32.598 00:14:32.598 real 0m2.869s 00:14:32.598 user 0m2.996s 00:14:32.598 sys 0m0.414s 00:14:32.598 20:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.598 20:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.598 20:25:16 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:32.598 20:25:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.598 20:25:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.598 20:25:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.598 ************************************ 00:14:32.598 START TEST xnvme_bdevperf 00:14:32.598 ************************************ 00:14:32.598 20:25:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:32.598 20:25:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:32.598 20:25:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:32.598 20:25:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:32.598 20:25:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:32.598 20:25:16 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:32.598 20:25:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:32.598 20:25:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:32.598 { 00:14:32.598 "subsystems": [ 00:14:32.598 { 00:14:32.598 "subsystem": "bdev", 00:14:32.598 "config": [ 00:14:32.598 { 00:14:32.598 "params": { 00:14:32.598 "io_mechanism": "io_uring_cmd", 00:14:32.598 "conserve_cpu": true, 00:14:32.598 "filename": "/dev/ng0n1", 00:14:32.598 "name": "xnvme_bdev" 00:14:32.598 }, 00:14:32.598 "method": "bdev_xnvme_create" 00:14:32.598 }, 00:14:32.598 { 00:14:32.598 "method": "bdev_wait_for_examine" 00:14:32.598 } 00:14:32.598 ] 00:14:32.598 } 00:14:32.598 ] 00:14:32.598 } 00:14:32.598 [2024-12-12 20:25:16.595137] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:14:32.598 [2024-12-12 20:25:16.595283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73253 ] 00:14:32.598 [2024-12-12 20:25:16.761713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.857 [2024-12-12 20:25:16.873948] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.115 Running I/O for 5 seconds... 00:14:34.985 40384.00 IOPS, 157.75 MiB/s [2024-12-12T20:25:20.147Z] 40160.00 IOPS, 156.88 MiB/s [2024-12-12T20:25:21.532Z] 39530.67 IOPS, 154.42 MiB/s [2024-12-12T20:25:22.475Z] 39760.00 IOPS, 155.31 MiB/s 00:14:38.247 Latency(us) 00:14:38.247 [2024-12-12T20:25:22.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.247 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:38.247 xnvme_bdev : 5.00 39073.15 152.63 0.00 0.00 1633.66 743.58 7360.20 00:14:38.247 [2024-12-12T20:25:22.475Z] =================================================================================================================== 00:14:38.247 [2024-12-12T20:25:22.475Z] Total : 39073.15 152.63 0.00 0.00 1633.66 743.58 7360.20 00:14:38.818 20:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:38.818 20:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:38.818 20:25:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:38.818 20:25:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:38.818 20:25:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:39.079 { 00:14:39.079 "subsystems": [ 00:14:39.079 { 00:14:39.079 "subsystem": "bdev", 00:14:39.079 "config": [ 00:14:39.079 { 00:14:39.079 "params": { 00:14:39.079 "io_mechanism": "io_uring_cmd", 00:14:39.079 "conserve_cpu": true, 00:14:39.079 "filename": "/dev/ng0n1", 00:14:39.079 "name": "xnvme_bdev" 00:14:39.079 }, 00:14:39.079 "method": "bdev_xnvme_create" 00:14:39.079 }, 00:14:39.079 { 00:14:39.079 "method": "bdev_wait_for_examine" 00:14:39.079 } 00:14:39.079 ] 00:14:39.079 } 00:14:39.079 ] 00:14:39.079 } 00:14:39.079 [2024-12-12 20:25:23.125518] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
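[annotation] As in the conserve_cpu=false pass, this bdevperf section runs one five-second job per workload against the same config. The loop, reduced to a runnable shape (workload names and flags taken from the -w arguments across this log; conf_json again stands in for a hypothetical file with the gen_conf JSON above):

  for io_pattern in randread randwrite unmap write_zeroes; do
      /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
          --json /dev/fd/62 -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096 \
          62<"$conf_json"
  done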
00:14:39.079 [2024-12-12 20:25:23.125714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73324 ] 00:14:39.079 [2024-12-12 20:25:23.305722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.337 [2024-12-12 20:25:23.437176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.597 Running I/O for 5 seconds... 00:14:41.505 41676.00 IOPS, 162.80 MiB/s [2024-12-12T20:25:27.137Z] 38350.50 IOPS, 149.81 MiB/s [2024-12-12T20:25:27.717Z] 37083.00 IOPS, 144.86 MiB/s [2024-12-12T20:25:29.100Z] 36461.00 IOPS, 142.43 MiB/s [2024-12-12T20:25:29.100Z] 36990.80 IOPS, 144.50 MiB/s 00:14:44.872 Latency(us) 00:14:44.872 [2024-12-12T20:25:29.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.872 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:44.872 xnvme_bdev : 5.00 36968.10 144.41 0.00 0.00 1726.41 718.38 6856.07 00:14:44.872 [2024-12-12T20:25:29.100Z] =================================================================================================================== 00:14:44.872 [2024-12-12T20:25:29.100Z] Total : 36968.10 144.41 0.00 0.00 1726.41 718.38 6856.07 00:14:45.443 20:25:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:45.443 20:25:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:45.443 20:25:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:45.443 20:25:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:45.443 20:25:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:45.443 { 00:14:45.443 "subsystems": [ 00:14:45.443 { 00:14:45.443 "subsystem": "bdev", 00:14:45.443 "config": [ 00:14:45.443 { 00:14:45.443 "params": { 00:14:45.444 "io_mechanism": "io_uring_cmd", 00:14:45.444 "conserve_cpu": true, 00:14:45.444 "filename": "/dev/ng0n1", 00:14:45.444 "name": "xnvme_bdev" 00:14:45.444 }, 00:14:45.444 "method": "bdev_xnvme_create" 00:14:45.444 }, 00:14:45.444 { 00:14:45.444 "method": "bdev_wait_for_examine" 00:14:45.444 } 00:14:45.444 ] 00:14:45.444 } 00:14:45.444 ] 00:14:45.444 } 00:14:45.444 [2024-12-12 20:25:29.588900] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:14:45.444 [2024-12-12 20:25:29.589256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73396 ] 00:14:45.705 [2024-12-12 20:25:29.750307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.706 [2024-12-12 20:25:29.888187] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.967 Running I/O for 5 seconds... 
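Note that --json /dev/fd/62 in the traced commands is not a fixed path: the harness hands gen_conf's output to bdevperf through bash process substitution, and /dev/fd/62 is simply the descriptor the shell happened to pick. Roughly (gen_conf is the harness helper traced above; the exact expansion is an assumption about how the script invokes it):
# process substitution feeds the generated JSON config straight to bdevperf
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json <(gen_conf) -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096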
00:14:48.399 78272.00 IOPS, 305.75 MiB/s [2024-12-12T20:25:33.199Z] 78304.00 IOPS, 305.88 MiB/s [2024-12-12T20:25:34.585Z] 78186.67 IOPS, 305.42 MiB/s [2024-12-12T20:25:35.528Z] 78368.00 IOPS, 306.12 MiB/s 00:14:51.300 Latency(us) 00:14:51.300 [2024-12-12T20:25:35.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.300 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:51.300 xnvme_bdev : 5.00 80539.04 314.61 0.00 0.00 791.18 431.66 2936.52 00:14:51.300 [2024-12-12T20:25:35.528Z] =================================================================================================================== 00:14:51.300 [2024-12-12T20:25:35.528Z] Total : 80539.04 314.61 0.00 0.00 791.18 431.66 2936.52 00:14:51.872 20:25:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:51.872 20:25:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:51.872 20:25:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:51.872 20:25:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:51.872 20:25:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:51.872 { 00:14:51.872 "subsystems": [ 00:14:51.872 { 00:14:51.872 "subsystem": "bdev", 00:14:51.872 "config": [ 00:14:51.872 { 00:14:51.872 "params": { 00:14:51.873 "io_mechanism": "io_uring_cmd", 00:14:51.873 "conserve_cpu": true, 00:14:51.873 "filename": "/dev/ng0n1", 00:14:51.873 "name": "xnvme_bdev" 00:14:51.873 }, 00:14:51.873 "method": "bdev_xnvme_create" 00:14:51.873 }, 00:14:51.873 { 00:14:51.873 "method": "bdev_wait_for_examine" 00:14:51.873 } 00:14:51.873 ] 00:14:51.873 } 00:14:51.873 ] 00:14:51.873 } 00:14:51.873 [2024-12-12 20:25:36.026389] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:14:51.873 [2024-12-12 20:25:36.026519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73478 ] 00:14:52.134 [2024-12-12 20:25:36.186346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.134 [2024-12-12 20:25:36.287550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.395 Running I/O for 5 seconds... 
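As a sanity check on the unmap results above, bdevperf's MiB/s column is just IOPS times the 4096-byte IO size:
# 80539.04 IOPS * 4096 B per IO / 2^20 B per MiB
awk 'BEGIN { printf "%.2f\n", 80539.04 * 4096 / 1048576 }'   # prints 314.61, matching the Total row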
00:14:54.714 61497.00 IOPS, 240.22 MiB/s [2024-12-12T20:25:39.906Z] 64500.50 IOPS, 251.96 MiB/s [2024-12-12T20:25:40.840Z] 62796.33 IOPS, 245.30 MiB/s [2024-12-12T20:25:41.775Z] 56629.00 IOPS, 221.21 MiB/s [2024-12-12T20:25:41.775Z] 46771.60 IOPS, 182.70 MiB/s 00:14:57.547 Latency(us) 00:14:57.547 [2024-12-12T20:25:41.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.547 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:57.547 xnvme_bdev : 5.02 46573.83 181.93 0.00 0.00 1366.15 47.46 56865.08 00:14:57.547 [2024-12-12T20:25:41.775Z] =================================================================================================================== 00:14:57.547 [2024-12-12T20:25:41.775Z] Total : 46573.83 181.93 0.00 0.00 1366.15 47.46 56865.08 00:14:58.112 00:14:58.112 real 0m25.797s 00:14:58.112 user 0m16.972s 00:14:58.112 sys 0m6.961s 00:14:58.113 20:25:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.113 20:25:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:58.113 ************************************ 00:14:58.113 END TEST xnvme_bdevperf 00:14:58.113 ************************************ 00:14:58.371 20:25:42 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:58.371 20:25:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:58.371 20:25:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.371 20:25:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.371 ************************************ 00:14:58.371 START TEST xnvme_fio_plugin 00:14:58.371 ************************************ 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 
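The xnvme_fio_plugin pass being set up above drives the same xnvme bdev through fio's external spdk_bdev ioengine. Stripped of the harness wrappers, the invocation is roughly the following sketch; all flags and paths are taken from the traced fio_bdev command, the LD_PRELOAD of libasan plus the plugin is shown in the trace just below, and <(gen_conf) is an assumption standing in for the /dev/fd/62 the harness actually passes:
# preload ASAN and the SPDK fio plugin, then run fio with the generated bdev config
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) --filename=xnvme_bdev \
  --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 \
  --thread=1 --name xnvme_bdev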
00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:58.371 20:25:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:58.371 { 00:14:58.371 "subsystems": [ 00:14:58.371 { 00:14:58.371 "subsystem": "bdev", 00:14:58.371 "config": [ 00:14:58.371 { 00:14:58.371 "params": { 00:14:58.371 "io_mechanism": "io_uring_cmd", 00:14:58.371 "conserve_cpu": true, 00:14:58.371 "filename": "/dev/ng0n1", 00:14:58.371 "name": "xnvme_bdev" 00:14:58.371 }, 00:14:58.371 "method": "bdev_xnvme_create" 00:14:58.371 }, 00:14:58.371 { 00:14:58.371 "method": "bdev_wait_for_examine" 00:14:58.371 } 00:14:58.371 ] 00:14:58.371 } 00:14:58.371 ] 00:14:58.371 } 00:14:58.371 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:58.371 fio-3.35 00:14:58.371 Starting 1 thread 00:15:04.932 00:15:04.932 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73595: Thu Dec 12 20:25:48 2024 00:15:04.932 read: IOPS=64.2k, BW=251MiB/s (263MB/s)(1254MiB/5001msec) 00:15:04.932 slat (usec): min=2, max=207, avg= 3.56, stdev= 1.37 00:15:04.932 clat (usec): min=434, max=2968, avg=860.64, stdev=155.87 00:15:04.932 lat (usec): min=437, max=2999, avg=864.20, stdev=156.08 00:15:04.932 clat percentiles (usec): 00:15:04.932 | 1.00th=[ 652], 5.00th=[ 676], 10.00th=[ 701], 20.00th=[ 734], 00:15:04.932 | 30.00th=[ 766], 40.00th=[ 799], 50.00th=[ 832], 60.00th=[ 865], 00:15:04.932 | 70.00th=[ 898], 80.00th=[ 971], 90.00th=[ 1074], 95.00th=[ 1156], 00:15:04.932 | 99.00th=[ 1336], 99.50th=[ 1418], 99.90th=[ 1762], 99.95th=[ 2114], 00:15:04.932 | 99.99th=[ 2704] 00:15:04.932 bw ( KiB/s): min=242688, max=272384, per=99.81%, avg=256226.56, stdev=9720.70, samples=9 00:15:04.932 iops : min=60672, max=68096, avg=64056.56, stdev=2430.17, samples=9 00:15:04.932 lat (usec) : 500=0.02%, 750=24.91%, 1000=58.11% 00:15:04.932 lat (msec) : 2=16.92%, 4=0.06% 00:15:04.932 cpu : usr=45.12%, sys=52.36%, ctx=70, majf=0, minf=762 00:15:04.932 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=24.9%, 32=50.1%, >=64=1.6% 00:15:04.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.932 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, 
>=64=0.0% 00:15:04.932 issued rwts: total=320960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:04.932 00:15:04.932 Run status group 0 (all jobs): 00:15:04.932 READ: bw=251MiB/s (263MB/s), 251MiB/s-251MiB/s (263MB/s-263MB/s), io=1254MiB (1315MB), run=5001-5001msec 00:15:04.932 ----------------------------------------------------- 00:15:04.932 Suppressions used: 00:15:04.932 count bytes template 00:15:04.932 1 11 /usr/src/fio/parse.c 00:15:04.932 1 8 libtcmalloc_minimal.so 00:15:04.932 1 904 libcrypto.so 00:15:04.932 ----------------------------------------------------- 00:15:04.932 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:04.932 20:25:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.932 { 00:15:04.932 "subsystems": [ 00:15:04.932 { 00:15:04.932 "subsystem": "bdev", 00:15:04.932 "config": [ 00:15:04.932 { 00:15:04.932 "params": { 00:15:04.932 "io_mechanism": "io_uring_cmd", 00:15:04.932 "conserve_cpu": true, 00:15:04.932 "filename": "/dev/ng0n1", 00:15:04.932 "name": "xnvme_bdev" 00:15:04.932 }, 00:15:04.932 "method": "bdev_xnvme_create" 00:15:04.932 }, 00:15:04.932 { 00:15:04.932 "method": "bdev_wait_for_examine" 00:15:04.932 } 00:15:04.932 ] 00:15:04.932 } 00:15:04.932 ] 00:15:04.932 } 00:15:05.191 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:05.191 fio-3.35 00:15:05.191 Starting 1 thread 00:15:11.765 00:15:11.765 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73681: Thu Dec 12 20:25:54 2024 00:15:11.765 write: IOPS=54.6k, BW=213MiB/s (224MB/s)(1066MiB/5001msec); 0 zone resets 00:15:11.765 slat (usec): min=2, max=927, avg= 4.24, stdev= 2.79 00:15:11.765 clat (usec): min=583, max=5532, avg=1004.20, stdev=277.65 00:15:11.765 lat (usec): min=586, max=5541, avg=1008.44, stdev=278.57 00:15:11.765 clat percentiles (usec): 00:15:11.765 | 1.00th=[ 652], 5.00th=[ 693], 10.00th=[ 725], 20.00th=[ 775], 00:15:11.765 | 30.00th=[ 824], 40.00th=[ 873], 50.00th=[ 930], 60.00th=[ 1004], 00:15:11.765 | 70.00th=[ 1090], 80.00th=[ 1205], 90.00th=[ 1385], 95.00th=[ 1549], 00:15:11.765 | 99.00th=[ 1909], 99.50th=[ 2040], 99.90th=[ 2507], 99.95th=[ 2638], 00:15:11.766 | 99.99th=[ 2933] 00:15:11.766 bw ( KiB/s): min=170480, max=245248, per=100.00%, avg=223784.89, stdev=26437.31, samples=9 00:15:11.766 iops : min=42620, max=61312, avg=55946.22, stdev=6609.29, samples=9 00:15:11.766 lat (usec) : 750=14.86%, 1000=44.39% 00:15:11.766 lat (msec) : 2=40.11%, 4=0.64%, 10=0.01% 00:15:11.766 cpu : usr=46.80%, sys=50.36%, ctx=13, majf=0, minf=763 00:15:11.766 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:11.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.766 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:11.766 issued rwts: total=0,273003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.766 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:11.766 00:15:11.766 Run status group 0 (all jobs): 00:15:11.766 WRITE: bw=213MiB/s (224MB/s), 213MiB/s-213MiB/s (224MB/s-224MB/s), io=1066MiB (1118MB), run=5001-5001msec 00:15:11.766 ----------------------------------------------------- 00:15:11.766 Suppressions used: 00:15:11.766 count bytes template 00:15:11.766 1 11 /usr/src/fio/parse.c 00:15:11.766 1 8 libtcmalloc_minimal.so 00:15:11.766 1 904 libcrypto.so 00:15:11.766 ----------------------------------------------------- 00:15:11.766 00:15:11.766 00:15:11.766 real 0m13.555s 00:15:11.766 user 0m7.337s 00:15:11.766 sys 0m5.615s 00:15:11.766 ************************************ 00:15:11.766 END TEST xnvme_fio_plugin 00:15:11.766 ************************************ 00:15:11.766 20:25:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.766 20:25:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:11.766 20:25:55 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73179 00:15:11.766 20:25:55 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73179 ']' 00:15:11.766 20:25:55 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73179 00:15:11.766 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73179) - No such process 00:15:11.766 20:25:55 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73179 is not found' 00:15:11.766 Process with pid 73179 is not found 00:15:11.766 ************************************ 00:15:11.766 END TEST nvme_xnvme 00:15:11.766 ************************************ 00:15:11.766 00:15:11.766 real 3m29.999s 00:15:11.766 user 1m53.683s 00:15:11.766 sys 1m19.533s 00:15:11.766 20:25:55 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.766 20:25:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.024 20:25:56 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:12.024 20:25:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:12.024 20:25:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.024 20:25:56 -- common/autotest_common.sh@10 -- # set +x 00:15:12.024 ************************************ 00:15:12.024 START TEST blockdev_xnvme 00:15:12.024 ************************************ 00:15:12.024 20:25:56 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:12.024 * Looking for test storage... 00:15:12.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:12.024 20:25:56 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:12.024 20:25:56 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:12.024 20:25:56 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:12.024 20:25:56 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.024 20:25:56 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:12.024 20:25:56 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.024 20:25:56 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:12.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.024 --rc genhtml_branch_coverage=1 00:15:12.024 --rc genhtml_function_coverage=1 00:15:12.024 --rc genhtml_legend=1 00:15:12.024 --rc geninfo_all_blocks=1 00:15:12.024 --rc geninfo_unexecuted_blocks=1 00:15:12.024 00:15:12.024 ' 00:15:12.024 20:25:56 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:12.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.024 --rc genhtml_branch_coverage=1 00:15:12.024 --rc genhtml_function_coverage=1 00:15:12.024 --rc genhtml_legend=1 00:15:12.024 --rc geninfo_all_blocks=1 00:15:12.024 --rc geninfo_unexecuted_blocks=1 00:15:12.024 00:15:12.024 ' 00:15:12.024 20:25:56 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:12.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.024 --rc genhtml_branch_coverage=1 00:15:12.024 --rc genhtml_function_coverage=1 00:15:12.024 --rc genhtml_legend=1 00:15:12.024 --rc geninfo_all_blocks=1 00:15:12.024 --rc geninfo_unexecuted_blocks=1 00:15:12.024 00:15:12.024 ' 00:15:12.024 20:25:56 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:12.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.024 --rc genhtml_branch_coverage=1 00:15:12.024 --rc genhtml_function_coverage=1 00:15:12.024 --rc genhtml_legend=1 00:15:12.024 --rc geninfo_all_blocks=1 00:15:12.024 --rc geninfo_unexecuted_blocks=1 00:15:12.024 00:15:12.024 ' 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:12.024 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:12.025 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:12.025 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:12.025 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:12.025 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:12.025 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:12.025 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:12.025 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73810 00:15:12.025 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:12.025 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:12.025 20:25:56 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73810 00:15:12.025 20:25:56 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73810 ']' 00:15:12.025 20:25:56 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.025 20:25:56 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.025 20:25:56 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.025 20:25:56 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.025 20:25:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.283 [2024-12-12 20:25:56.262469] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
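waitforlisten above blocks until spdk_tgt answers on its RPC socket before any bdev_xnvme_create calls are issued. A rough equivalent, assuming the default /var/tmp/spdk.sock socket shown in the log; rpc_get_methods is a standard SPDK RPC, but the polling loop itself is a sketch, not the harness implementation:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt & spdk_tgt_pid=$!
# keep polling until the UNIX domain socket accepts RPCs
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done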
00:15:12.283 [2024-12-12 20:25:56.262700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73810 ] 00:15:12.283 [2024-12-12 20:25:56.421845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.541 [2024-12-12 20:25:56.522501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.107 20:25:57 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.107 20:25:57 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:13.107 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:13.107 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:13.107 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:13.107 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:13.107 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:13.365 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:13.933 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:13.933 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:13.933 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:13.933 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:13.933 20:25:57 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.933 20:25:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.933 20:25:57 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:13.933 nvme0n1 00:15:13.933 nvme0n2 00:15:13.933 nvme0n3 00:15:13.933 nvme1n1 00:15:13.933 nvme2n1 00:15:13.933 nvme3n1 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.933 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.933 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:13.933 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.933 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.933 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.933 
20:25:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.933 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:13.933 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.933 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:13.933 20:25:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.933 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:13.934 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "538dc292-b1df-492a-a140-d6ad53b6c337"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "538dc292-b1df-492a-a140-d6ad53b6c337",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "ffa83eaf-f95b-473e-95bb-68a03a2f383f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ffa83eaf-f95b-473e-95bb-68a03a2f383f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "cc7f78c9-39bc-4386-ab1f-64713e29ee73"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cc7f78c9-39bc-4386-ab1f-64713e29ee73",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"f86521fd-7984-4aa4-a74a-9411d83c5fdd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f86521fd-7984-4aa4-a74a-9411d83c5fdd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "605254fe-e5de-4e57-9d76-2694f636a7c9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "605254fe-e5de-4e57-9d76-2694f636a7c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e8c161eb-40ca-486a-b826-4e16d4b514f6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e8c161eb-40ca-486a-b826-4e16d4b514f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:13.934 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:14.194 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:14.194 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:14.194 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:14.194 20:25:58 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73810 00:15:14.194 20:25:58 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73810 ']' 00:15:14.194 20:25:58 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73810 00:15:14.194 20:25:58 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:14.194 20:25:58 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.194 20:25:58 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 73810 00:15:14.194 killing process with pid 73810 00:15:14.194 20:25:58 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:14.194 20:25:58 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:14.194 20:25:58 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73810' 00:15:14.194 20:25:58 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73810 00:15:14.194 20:25:58 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73810 00:15:15.571 20:25:59 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:15.571 20:25:59 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:15.571 20:25:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:15.571 20:25:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.571 20:25:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.571 ************************************ 00:15:15.571 START TEST bdev_hello_world 00:15:15.571 ************************************ 00:15:15.571 20:25:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:15.571 [2024-12-12 20:25:59.754233] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:15:15.571 [2024-12-12 20:25:59.754490] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74094 ] 00:15:15.832 [2024-12-12 20:25:59.914144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.832 [2024-12-12 20:26:00.009353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.163 [2024-12-12 20:26:00.342649] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:16.163 [2024-12-12 20:26:00.342687] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:16.163 [2024-12-12 20:26:00.342702] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:16.163 [2024-12-12 20:26:00.344523] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:16.163 [2024-12-12 20:26:00.344922] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:16.163 [2024-12-12 20:26:00.344949] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:16.163 [2024-12-12 20:26:00.345154] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
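The bdev_hello_world pass above is self-contained: hello_bdev is a standalone SPDK example app that loads the JSON config, opens the bdev named by -b, writes "Hello World!" to it, and reads the string back, exactly the sequence the NOTICE lines above trace. It can be run directly with the command from this log:
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1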
00:15:16.163 00:15:16.163 [2024-12-12 20:26:00.345174] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:17.102 ************************************ 00:15:17.102 END TEST bdev_hello_world 00:15:17.102 ************************************ 00:15:17.102 00:15:17.102 real 0m1.338s 00:15:17.102 user 0m1.060s 00:15:17.102 sys 0m0.165s 00:15:17.102 20:26:01 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.102 20:26:01 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:17.102 20:26:01 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:17.102 20:26:01 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:17.102 20:26:01 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.102 20:26:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.102 ************************************ 00:15:17.102 START TEST bdev_bounds 00:15:17.102 ************************************ 00:15:17.102 20:26:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:17.102 Process bdevio pid: 74125 00:15:17.102 20:26:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74125 00:15:17.102 20:26:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:17.102 20:26:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74125' 00:15:17.102 20:26:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74125 00:15:17.102 20:26:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74125 ']' 00:15:17.102 20:26:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:17.102 20:26:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.102 20:26:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.102 20:26:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.102 20:26:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.102 20:26:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:17.102 [2024-12-12 20:26:01.131265] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
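The bdev_bounds pass launches bdevio in wait mode and then triggers the suites over RPC: -w appears to hold bdevio until perform_tests is issued, and -s 0 passes the PRE_RESERVED_MEM value set earlier in blockdev.sh. Reproduced roughly from the two commands in this log (tests.py perform_tests is traced just below):
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests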
00:15:17.102 [2024-12-12 20:26:01.131385] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74125 ] 00:15:17.102 [2024-12-12 20:26:01.285473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:17.360 [2024-12-12 20:26:01.365454] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.360 [2024-12-12 20:26:01.365494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.360 [2024-12-12 20:26:01.365498] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.927 20:26:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.927 20:26:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:17.927 20:26:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:17.927 I/O targets: 00:15:17.927 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:17.927 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:17.927 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:17.927 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:17.927 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:17.927 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:17.927 00:15:17.927 00:15:17.927 CUnit - A unit testing framework for C - Version 2.1-3 00:15:17.927 http://cunit.sourceforge.net/ 00:15:17.927 00:15:17.927 00:15:17.927 Suite: bdevio tests on: nvme3n1 00:15:17.927 Test: blockdev write read block ...passed 00:15:17.927 Test: blockdev write zeroes read block ...passed 00:15:17.927 Test: blockdev write zeroes read no split ...passed 00:15:17.927 Test: blockdev write zeroes read split ...passed 00:15:17.927 Test: blockdev write zeroes read split partial ...passed 00:15:17.927 Test: blockdev reset ...passed 00:15:17.927 Test: blockdev write read 8 blocks ...passed 00:15:17.927 Test: blockdev write read size > 128k ...passed 00:15:17.927 Test: blockdev write read invalid size ...passed 00:15:17.927 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:17.927 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:17.927 Test: blockdev write read max offset ...passed 00:15:17.927 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:17.927 Test: blockdev writev readv 8 blocks ...passed 00:15:17.927 Test: blockdev writev readv 30 x 1block ...passed 00:15:17.927 Test: blockdev writev readv block ...passed 00:15:17.927 Test: blockdev writev readv size > 128k ...passed 00:15:17.927 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:17.927 Test: blockdev comparev and writev ...passed 00:15:17.927 Test: blockdev nvme passthru rw ...passed 00:15:17.927 Test: blockdev nvme passthru vendor specific ...passed 00:15:17.927 Test: blockdev nvme admin passthru ...passed 00:15:17.927 Test: blockdev copy ...passed 00:15:17.927 Suite: bdevio tests on: nvme2n1 00:15:17.927 Test: blockdev write read block ...passed 00:15:17.927 Test: blockdev write zeroes read block ...passed 00:15:17.927 Test: blockdev write zeroes read no split ...passed 00:15:17.927 Test: blockdev write zeroes read split ...passed 00:15:17.927 Test: blockdev write zeroes read split partial ...passed 00:15:17.927 Test: blockdev reset ...passed 
00:15:17.927 Test: blockdev write read 8 blocks ...passed 00:15:17.927 Test: blockdev write read size > 128k ...passed 00:15:17.927 Test: blockdev write read invalid size ...passed 00:15:17.927 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:17.927 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:17.927 Test: blockdev write read max offset ...passed 00:15:17.927 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:17.927 Test: blockdev writev readv 8 blocks ...passed 00:15:17.927 Test: blockdev writev readv 30 x 1block ...passed 00:15:17.927 Test: blockdev writev readv block ...passed 00:15:17.927 Test: blockdev writev readv size > 128k ...passed 00:15:17.927 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:17.927 Test: blockdev comparev and writev ...passed 00:15:17.927 Test: blockdev nvme passthru rw ...passed 00:15:17.927 Test: blockdev nvme passthru vendor specific ...passed 00:15:17.927 Test: blockdev nvme admin passthru ...passed 00:15:17.927 Test: blockdev copy ...passed 00:15:17.927 Suite: bdevio tests on: nvme1n1 00:15:17.927 Test: blockdev write read block ...passed 00:15:17.927 Test: blockdev write zeroes read block ...passed 00:15:18.186 Test: blockdev write zeroes read no split ...passed 00:15:18.186 Test: blockdev write zeroes read split ...passed 00:15:18.186 Test: blockdev write zeroes read split partial ...passed 00:15:18.186 Test: blockdev reset ...passed 00:15:18.186 Test: blockdev write read 8 blocks ...passed 00:15:18.186 Test: blockdev write read size > 128k ...passed 00:15:18.186 Test: blockdev write read invalid size ...passed 00:15:18.186 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:18.186 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:18.186 Test: blockdev write read max offset ...passed 00:15:18.186 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:18.186 Test: blockdev writev readv 8 blocks ...passed 00:15:18.186 Test: blockdev writev readv 30 x 1block ...passed 00:15:18.186 Test: blockdev writev readv block ...passed 00:15:18.186 Test: blockdev writev readv size > 128k ...passed 00:15:18.186 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:18.186 Test: blockdev comparev and writev ...passed 00:15:18.186 Test: blockdev nvme passthru rw ...passed 00:15:18.186 Test: blockdev nvme passthru vendor specific ...passed 00:15:18.186 Test: blockdev nvme admin passthru ...passed 00:15:18.186 Test: blockdev copy ...passed 00:15:18.186 Suite: bdevio tests on: nvme0n3 00:15:18.186 Test: blockdev write read block ...passed 00:15:18.186 Test: blockdev write zeroes read block ...passed 00:15:18.186 Test: blockdev write zeroes read no split ...passed 00:15:18.186 Test: blockdev write zeroes read split ...passed 00:15:18.186 Test: blockdev write zeroes read split partial ...passed 00:15:18.186 Test: blockdev reset ...passed 00:15:18.186 Test: blockdev write read 8 blocks ...passed 00:15:18.186 Test: blockdev write read size > 128k ...passed 00:15:18.186 Test: blockdev write read invalid size ...passed 00:15:18.186 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:18.186 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:18.186 Test: blockdev write read max offset ...passed 00:15:18.186 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:18.186 Test: blockdev writev readv 8 blocks 
...passed 00:15:18.186 Test: blockdev writev readv 30 x 1block ...passed 00:15:18.186 Test: blockdev writev readv block ...passed 00:15:18.186 Test: blockdev writev readv size > 128k ...passed 00:15:18.186 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:18.186 Test: blockdev comparev and writev ...passed 00:15:18.186 Test: blockdev nvme passthru rw ...passed 00:15:18.186 Test: blockdev nvme passthru vendor specific ...passed 00:15:18.186 Test: blockdev nvme admin passthru ...passed 00:15:18.186 Test: blockdev copy ...passed 00:15:18.186 Suite: bdevio tests on: nvme0n2 00:15:18.186 Test: blockdev write read block ...passed 00:15:18.186 Test: blockdev write zeroes read block ...passed 00:15:18.186 Test: blockdev write zeroes read no split ...passed 00:15:18.186 Test: blockdev write zeroes read split ...passed 00:15:18.186 Test: blockdev write zeroes read split partial ...passed 00:15:18.186 Test: blockdev reset ...passed 00:15:18.186 Test: blockdev write read 8 blocks ...passed 00:15:18.186 Test: blockdev write read size > 128k ...passed 00:15:18.186 Test: blockdev write read invalid size ...passed 00:15:18.186 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:18.186 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:18.186 Test: blockdev write read max offset ...passed 00:15:18.186 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:18.186 Test: blockdev writev readv 8 blocks ...passed 00:15:18.186 Test: blockdev writev readv 30 x 1block ...passed 00:15:18.186 Test: blockdev writev readv block ...passed 00:15:18.186 Test: blockdev writev readv size > 128k ...passed 00:15:18.186 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:18.186 Test: blockdev comparev and writev ...passed 00:15:18.186 Test: blockdev nvme passthru rw ...passed 00:15:18.186 Test: blockdev nvme passthru vendor specific ...passed 00:15:18.186 Test: blockdev nvme admin passthru ...passed 00:15:18.186 Test: blockdev copy ...passed 00:15:18.186 Suite: bdevio tests on: nvme0n1 00:15:18.186 Test: blockdev write read block ...passed 00:15:18.186 Test: blockdev write zeroes read block ...passed 00:15:18.186 Test: blockdev write zeroes read no split ...passed 00:15:18.186 Test: blockdev write zeroes read split ...passed 00:15:18.186 Test: blockdev write zeroes read split partial ...passed 00:15:18.186 Test: blockdev reset ...passed 00:15:18.186 Test: blockdev write read 8 blocks ...passed 00:15:18.186 Test: blockdev write read size > 128k ...passed 00:15:18.186 Test: blockdev write read invalid size ...passed 00:15:18.186 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:18.186 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:18.186 Test: blockdev write read max offset ...passed 00:15:18.186 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:18.186 Test: blockdev writev readv 8 blocks ...passed 00:15:18.186 Test: blockdev writev readv 30 x 1block ...passed 00:15:18.186 Test: blockdev writev readv block ...passed 00:15:18.186 Test: blockdev writev readv size > 128k ...passed 00:15:18.186 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:18.186 Test: blockdev comparev and writev ...passed 00:15:18.186 Test: blockdev nvme passthru rw ...passed 00:15:18.186 Test: blockdev nvme passthru vendor specific ...passed 00:15:18.186 Test: blockdev nvme admin passthru ...passed 00:15:18.186 Test: blockdev copy ...passed 
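Note: each of the six bdevs above runs the same 23-case bdevio suite, which is where the totals in the CUnit run summary that follows come from. A minimal sketch of that accounting in shell (the bdev names and per-suite case count are read off the trace above; the script itself is illustrative, not part of the harness):

    #!/usr/bin/env bash
    # Hypothetical check: recompute the expected CUnit totals for this run.
    suites=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)   # bdevs under test, per the trace
    tests_per_suite=23                                         # cases listed in each suite above
    echo "suites=${#suites[@]} tests=$(( ${#suites[@]} * tests_per_suite ))"
    # prints: suites=6 tests=138, matching the Run Summary below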
00:15:18.186 
00:15:18.187 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:15:18.187               suites      6      6    n/a      0        0
00:15:18.187                tests    138    138    138      0        0
00:15:18.187              asserts    780    780    780      0      n/a
00:15:18.187 
00:15:18.187 Elapsed time =    0.905 seconds
00:15:18.187 0
00:15:18.187 20:26:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74125
00:15:18.187 20:26:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74125 ']'
00:15:18.187 20:26:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74125
00:15:18.187 20:26:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:15:18.187 20:26:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:18.187 20:26:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74125
killing process with pid 74125
00:15:18.187 20:26:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:18.187 20:26:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:18.187 20:26:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74125'
00:15:18.187 20:26:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74125
00:15:18.187 20:26:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74125
00:15:18.754 20:26:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:15:18.754 
00:15:18.754 real 0m1.908s
00:15:18.754 user 0m4.811s
00:15:18.754 sys 0m0.258s
00:15:18.754 20:26:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:18.754 20:26:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:15:18.754 ************************************
00:15:18.754 END TEST bdev_bounds
00:15:18.754 ************************************
00:15:19.011 20:26:03 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:15:19.012 20:26:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:15:19.012 20:26:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:19.012 20:26:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:19.012 ************************************
00:15:19.012 START TEST bdev_nbd
00:15:19.012 ************************************
00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
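Note: the bdev_nbd test being set up here exercises SPDK's NBD export path in two passes. nbd_rpc_start_stop_verify exports each bdev on a kernel NBD device, waits for it to appear, checks it is readable, and detaches it; nbd_rpc_data_verify then exports all six bdevs at once, writes 1 MiB of random data through each /dev/nbdX, and verifies the read-back with cmp. A condensed sketch of one export/verify/detach cycle, using only rpc.py invocations that appear verbatim in the trace below (the scratch path /tmp/nbdtest is a placeholder; the harness uses a file under test/bdev/):

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # Export the bdev as a kernel NBD block device via the spdk-nbd RPC socket;
    # the RPC prints the device node it attached to.
    dev=$("$rpc" -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0)
    # Prove the device is readable: one 4 KiB direct read, as the harness does.
    dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    # Detach the NBD device again.
    "$rpc" -s "$sock" nbd_stop_disk "$dev"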
00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74183 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74183 /var/tmp/spdk-nbd.sock 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74183 ']' 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.012 20:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:19.012 [2024-12-12 20:26:03.085917] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
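Note: every export below is gated on the waitfornbd helper from common/autotest_common.sh, whose behavior can be read off the xtrace that follows: poll /proc/partitions for the device name for up to 20 iterations, then do a 4 KiB direct dd read into test/bdev/nbdtest and require a non-zero file size before returning 0. A reconstruction of that logic as a sketch (the sleep interval is an assumption, and the real helper wraps the dd in a second retry loop; the trace records only the grep, break, dd, stat, and rm steps):

    # Sketch of waitfornbd as inferred from the xtrace below; not the verbatim helper.
    waitfornbd() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device registered with the kernel?
            sleep 0.1                                          # assumed back-off between polls
        done
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)                        # the trace shows size=4096
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                                       # trace: '[' 4096 '!=' 0 ']' then return 0
    }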
00:15:19.012 [2024-12-12 20:26:03.086034] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.270 [2024-12-12 20:26:03.241401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.270 [2024-12-12 20:26:03.318979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:19.836 20:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.094 
1+0 records in 00:15:20.094 1+0 records out 00:15:20.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469521 s, 8.7 MB/s 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:20.094 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.353 1+0 records in 00:15:20.353 1+0 records out 00:15:20.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469599 s, 8.7 MB/s 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:20.353 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:20.611 20:26:04 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.611 1+0 records in 00:15:20.611 1+0 records out 00:15:20.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589457 s, 6.9 MB/s 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:20.611 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.869 1+0 records in 00:15:20.869 1+0 records out 00:15:20.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379133 s, 10.8 MB/s 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:20.869 20:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.869 1+0 records in 00:15:20.869 1+0 records out 00:15:20.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348409 s, 11.8 MB/s 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.869 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.870 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:20.870 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:20.870 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:20.870 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:21.129 20:26:05 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:21.129 1+0 records in 00:15:21.129 1+0 records out 00:15:21.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000790514 s, 5.2 MB/s 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:21.129 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:21.387 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:21.387 { 00:15:21.387 "nbd_device": "/dev/nbd0", 00:15:21.387 "bdev_name": "nvme0n1" 00:15:21.387 }, 00:15:21.387 { 00:15:21.387 "nbd_device": "/dev/nbd1", 00:15:21.388 "bdev_name": "nvme0n2" 00:15:21.388 }, 00:15:21.388 { 00:15:21.388 "nbd_device": "/dev/nbd2", 00:15:21.388 "bdev_name": "nvme0n3" 00:15:21.388 }, 00:15:21.388 { 00:15:21.388 "nbd_device": "/dev/nbd3", 00:15:21.388 "bdev_name": "nvme1n1" 00:15:21.388 }, 00:15:21.388 { 00:15:21.388 "nbd_device": "/dev/nbd4", 00:15:21.388 "bdev_name": "nvme2n1" 00:15:21.388 }, 00:15:21.388 { 00:15:21.388 "nbd_device": "/dev/nbd5", 00:15:21.388 "bdev_name": "nvme3n1" 00:15:21.388 } 00:15:21.388 ]' 00:15:21.388 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:21.388 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:21.388 { 00:15:21.388 "nbd_device": "/dev/nbd0", 00:15:21.388 "bdev_name": "nvme0n1" 00:15:21.388 }, 00:15:21.388 { 00:15:21.388 "nbd_device": "/dev/nbd1", 00:15:21.388 "bdev_name": "nvme0n2" 00:15:21.388 }, 00:15:21.388 { 00:15:21.388 "nbd_device": "/dev/nbd2", 00:15:21.388 "bdev_name": "nvme0n3" 00:15:21.388 }, 00:15:21.388 { 00:15:21.388 "nbd_device": "/dev/nbd3", 00:15:21.388 "bdev_name": "nvme1n1" 00:15:21.388 }, 00:15:21.388 { 00:15:21.388 "nbd_device": "/dev/nbd4", 00:15:21.388 "bdev_name": "nvme2n1" 00:15:21.388 }, 00:15:21.388 { 00:15:21.388 "nbd_device": "/dev/nbd5", 00:15:21.388 "bdev_name": "nvme3n1" 00:15:21.388 } 00:15:21.388 ]' 00:15:21.388 20:26:05 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:21.388 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:21.388 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:21.388 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:21.388 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:21.388 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:21.388 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.388 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:21.650 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:21.650 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:21.650 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:21.650 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.650 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.650 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:21.650 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.650 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.650 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.650 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:21.919 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:21.919 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:21.919 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:21.919 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.919 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.919 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:21.919 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.919 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.919 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.919 20:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:22.178 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:22.178 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:22.178 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:22.178 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.178 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.178 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:22.178 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:22.178 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.178 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.178 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.438 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:22.700 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:22.700 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:22.700 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:22.700 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.700 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.700 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:22.700 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:22.700 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.700 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:22.700 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.700 20:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:22.961 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:23.221 /dev/nbd0 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.221 1+0 records in 00:15:23.221 1+0 records out 00:15:23.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250484 s, 16.4 MB/s 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:23.221 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:23.481 /dev/nbd1 00:15:23.481 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:23.481 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:23.481 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:23.481 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:23.481 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.481 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.481 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:23.481 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:23.481 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.481 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.481 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.481 1+0 records in 00:15:23.481 1+0 records out 00:15:23.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302284 s, 13.6 MB/s 00:15:23.482 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.482 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:23.482 20:26:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.482 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.482 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:23.482 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.482 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:23.482 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:23.743 /dev/nbd10 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.743 1+0 records in 00:15:23.743 1+0 records out 00:15:23.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397862 s, 10.3 MB/s 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:23.743 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:23.743 /dev/nbd11 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:24.004 20:26:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.004 1+0 records in 00:15:24.004 1+0 records out 00:15:24.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594332 s, 6.9 MB/s 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:24.004 20:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:24.004 /dev/nbd12 00:15:24.004 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:24.004 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:24.004 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:24.004 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:24.004 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:24.004 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:24.004 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:24.004 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:24.004 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:24.004 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:24.004 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.004 1+0 records in 00:15:24.004 1+0 records out 00:15:24.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498077 s, 8.2 MB/s 00:15:24.004 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.004 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:24.004 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:24.263 /dev/nbd13 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.263 1+0 records in 00:15:24.263 1+0 records out 00:15:24.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000763107 s, 5.4 MB/s 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:24.263 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:24.521 { 00:15:24.521 "nbd_device": "/dev/nbd0", 00:15:24.521 "bdev_name": "nvme0n1" 00:15:24.521 }, 00:15:24.521 { 00:15:24.521 "nbd_device": "/dev/nbd1", 00:15:24.521 "bdev_name": "nvme0n2" 00:15:24.521 }, 00:15:24.521 { 00:15:24.521 "nbd_device": "/dev/nbd10", 00:15:24.521 "bdev_name": "nvme0n3" 00:15:24.521 }, 00:15:24.521 { 00:15:24.521 "nbd_device": "/dev/nbd11", 00:15:24.521 "bdev_name": "nvme1n1" 00:15:24.521 }, 00:15:24.521 { 00:15:24.521 "nbd_device": "/dev/nbd12", 00:15:24.521 "bdev_name": "nvme2n1" 00:15:24.521 }, 00:15:24.521 { 00:15:24.521 "nbd_device": "/dev/nbd13", 00:15:24.521 "bdev_name": "nvme3n1" 00:15:24.521 } 00:15:24.521 ]' 00:15:24.521 20:26:08 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:24.521 { 00:15:24.521 "nbd_device": "/dev/nbd0", 00:15:24.521 "bdev_name": "nvme0n1" 00:15:24.521 }, 00:15:24.521 { 00:15:24.521 "nbd_device": "/dev/nbd1", 00:15:24.521 "bdev_name": "nvme0n2" 00:15:24.521 }, 00:15:24.521 { 00:15:24.521 "nbd_device": "/dev/nbd10", 00:15:24.521 "bdev_name": "nvme0n3" 00:15:24.521 }, 00:15:24.521 { 00:15:24.521 "nbd_device": "/dev/nbd11", 00:15:24.521 "bdev_name": "nvme1n1" 00:15:24.521 }, 00:15:24.521 { 00:15:24.521 "nbd_device": "/dev/nbd12", 00:15:24.521 "bdev_name": "nvme2n1" 00:15:24.521 }, 00:15:24.521 { 00:15:24.521 "nbd_device": "/dev/nbd13", 00:15:24.521 "bdev_name": "nvme3n1" 00:15:24.521 } 00:15:24.521 ]' 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:24.521 /dev/nbd1 00:15:24.521 /dev/nbd10 00:15:24.521 /dev/nbd11 00:15:24.521 /dev/nbd12 00:15:24.521 /dev/nbd13' 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:24.521 /dev/nbd1 00:15:24.521 /dev/nbd10 00:15:24.521 /dev/nbd11 00:15:24.521 /dev/nbd12 00:15:24.521 /dev/nbd13' 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:24.521 256+0 records in 00:15:24.521 256+0 records out 00:15:24.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00883822 s, 119 MB/s 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:24.521 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:24.779 256+0 records in 00:15:24.780 256+0 records out 00:15:24.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0687343 s, 15.3 MB/s 00:15:24.780 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:24.780 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:24.780 256+0 records in 00:15:24.780 256+0 records out 00:15:24.780 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0753275 s, 13.9 MB/s 00:15:24.780 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:24.780 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:24.780 256+0 records in 00:15:24.780 256+0 records out 00:15:24.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.06428 s, 16.3 MB/s 00:15:24.780 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:24.780 20:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:25.038 256+0 records in 00:15:25.038 256+0 records out 00:15:25.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0661484 s, 15.9 MB/s 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:25.038 256+0 records in 00:15:25.038 256+0 records out 00:15:25.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0752032 s, 13.9 MB/s 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:25.038 256+0 records in 00:15:25.038 256+0 records out 00:15:25.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0735076 s, 14.3 MB/s 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:25.038 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:25.039 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:25.039 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.039 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:25.299 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:25.299 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:25.299 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:25.299 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.299 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.299 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:25.299 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:25.299 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.299 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.299 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:25.558 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:25.558 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:25.558 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:25.558 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.558 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.558 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:25.558 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:25.558 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.558 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.558 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:25.816 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:25.816 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:25.816 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:25.816 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.816 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.816 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:25.816 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:25.816 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.816 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.816 20:26:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:26.074 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:26.074 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:26.074 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:26.074 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.074 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.074 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:26.074 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.074 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.074 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.074 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:26.350 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:26.350 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:26.350 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:26.350 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.351 20:26:10 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:26.351 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:26.610 20:26:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:26.867 malloc_lvol_verify 00:15:26.867 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:27.125 ae15785c-e98c-4364-bf8f-72bcc5901f57 00:15:27.125 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:27.382 8b35bff4-5f0c-4314-af7f-0879283b0e1e 00:15:27.382 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:27.640 /dev/nbd0 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
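The nbd_with_lvol_verify sequence expanded in the trace above reduces to a short series of RPC calls. The following sketch condenses it: every subcommand, size, and device name is verbatim from the trace; only the $RPC shorthand for the full rpc.py invocation is ours.

# Create a 16 MiB malloc bdev with 512-byte blocks, build an lvstore on it,
# carve out a 4 MiB lvol, export the lvol over NBD, and format it; a
# successful mkfs.ext4 shows the lvol path can service real read/write I/O.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create -b malloc_lvol_verify 16 512
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
$RPC bdev_lvol_create lvol 4 -l lvs
$RPC nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0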
00:15:27.640 mke2fs 1.47.0 (5-Feb-2023) 00:15:27.640 Discarding device blocks: 0/4096 done 00:15:27.640 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:27.640 00:15:27.640 Allocating group tables: 0/1 done 00:15:27.640 Writing inode tables: 0/1 done 00:15:27.640 Creating journal (1024 blocks): done 00:15:27.640 Writing superblocks and filesystem accounting information: 0/1 done 00:15:27.640 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74183 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74183 ']' 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74183 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.640 20:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74183 00:15:27.898 20:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.898 20:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.898 killing process with pid 74183 00:15:27.898 20:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74183' 00:15:27.898 20:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74183 00:15:27.898 20:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74183 00:15:28.464 20:26:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:28.464 00:15:28.464 real 0m9.462s 00:15:28.464 user 0m13.538s 00:15:28.464 sys 0m3.153s 00:15:28.464 20:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.464 20:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:28.464 ************************************ 
00:15:28.464 END TEST bdev_nbd 00:15:28.464 ************************************ 00:15:28.464 20:26:12 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:28.464 20:26:12 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:28.464 20:26:12 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:28.464 20:26:12 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:28.464 20:26:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:28.464 20:26:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.464 20:26:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.464 ************************************ 00:15:28.464 START TEST bdev_fio 00:15:28.464 ************************************ 00:15:28.464 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:28.464 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:28.464 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:28.464 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:28.464 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:28.464 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:28.464 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:28.464 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:28.464 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:28.465 ************************************ 00:15:28.465 START TEST bdev_fio_rw_verify 00:15:28.465 ************************************ 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:28.465 20:26:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:28.723 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:28.723 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:28.723 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:28.723 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:28.723 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:28.723 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:28.723 fio-3.35 00:15:28.723 Starting 6 threads 00:15:40.933 00:15:40.933 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74575: Thu Dec 12 20:26:23 2024 00:15:40.933 read: IOPS=32.2k, BW=126MiB/s (132MB/s)(1259MiB/10003msec) 00:15:40.933 slat (usec): min=2, max=1951, avg= 5.26, stdev=10.01 00:15:40.933 clat (usec): min=66, max=8288, avg=553.98, 
stdev=495.81 00:15:40.933 lat (usec): min=77, max=8304, avg=559.24, stdev=496.61 00:15:40.933 clat percentiles (usec): 00:15:40.933 | 50.000th=[ 400], 99.000th=[ 2540], 99.900th=[ 3916], 99.990th=[ 5866], 00:15:40.933 | 99.999th=[ 8291] 00:15:40.933 write: IOPS=32.6k, BW=127MiB/s (133MB/s)(1273MiB/10003msec); 0 zone resets 00:15:40.933 slat (usec): min=10, max=5351, avg=25.96, stdev=70.92 00:15:40.933 clat (usec): min=69, max=9586, avg=686.43, stdev=562.44 00:15:40.933 lat (usec): min=87, max=9611, avg=712.39, stdev=572.88 00:15:40.933 clat percentiles (usec): 00:15:40.933 | 50.000th=[ 506], 99.000th=[ 2868], 99.900th=[ 4293], 99.990th=[ 5473], 00:15:40.933 | 99.999th=[ 9503] 00:15:40.933 bw ( KiB/s): min=54707, max=202463, per=100.00%, avg=133268.68, stdev=8645.74, samples=114 00:15:40.933 iops : min=13676, max=50615, avg=33316.84, stdev=2161.43, samples=114 00:15:40.933 lat (usec) : 100=0.05%, 250=15.77%, 500=40.93%, 750=20.71%, 1000=7.64% 00:15:40.933 lat (msec) : 2=11.54%, 4=3.23%, 10=0.11% 00:15:40.933 cpu : usr=47.15%, sys=32.30%, ctx=8493, majf=0, minf=26921 00:15:40.933 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:40.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.933 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.933 issued rwts: total=322325,325999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.933 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:40.933 00:15:40.933 Run status group 0 (all jobs): 00:15:40.933 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=1259MiB (1320MB), run=10003-10003msec 00:15:40.933 WRITE: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=1273MiB (1335MB), run=10003-10003msec 00:15:40.933 ----------------------------------------------------- 00:15:40.933 Suppressions used: 00:15:40.933 count bytes template 00:15:40.933 6 48 /usr/src/fio/parse.c 00:15:40.933 3456 331776 /usr/src/fio/iolog.c 00:15:40.933 1 8 libtcmalloc_minimal.so 00:15:40.933 1 904 libcrypto.so 00:15:40.933 ----------------------------------------------------- 00:15:40.933 00:15:40.933 00:15:40.933 real 0m11.921s 00:15:40.933 user 0m29.815s 00:15:40.933 sys 0m19.655s 00:15:40.933 20:26:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.933 ************************************ 00:15:40.933 END TEST bdev_fio_rw_verify 00:15:40.933 ************************************ 00:15:40.933 20:26:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:40.933 20:26:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "538dc292-b1df-492a-a140-d6ad53b6c337"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "538dc292-b1df-492a-a140-d6ad53b6c337",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "ffa83eaf-f95b-473e-95bb-68a03a2f383f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ffa83eaf-f95b-473e-95bb-68a03a2f383f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "cc7f78c9-39bc-4386-ab1f-64713e29ee73"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cc7f78c9-39bc-4386-ab1f-64713e29ee73",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "f86521fd-7984-4aa4-a74a-9411d83c5fdd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f86521fd-7984-4aa4-a74a-9411d83c5fdd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "605254fe-e5de-4e57-9d76-2694f636a7c9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "605254fe-e5de-4e57-9d76-2694f636a7c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e8c161eb-40ca-486a-b826-4e16d4b514f6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e8c161eb-40ca-486a-b826-4e16d4b514f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:40.934 /home/vagrant/spdk_repo/spdk 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:15:40.934 00:15:40.934 real 0m12.086s 00:15:40.934 user 0m29.882s 00:15:40.934 sys 0m19.741s 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.934 ************************************ 00:15:40.934 END TEST bdev_fio 00:15:40.934 ************************************ 00:15:40.934 20:26:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:40.934 20:26:24 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:40.934 20:26:24 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:40.934 20:26:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:40.934 20:26:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.934 20:26:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.934 ************************************ 00:15:40.934 START TEST bdev_verify 00:15:40.934 ************************************ 00:15:40.934 20:26:24 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:40.934 [2024-12-12 20:26:24.766045] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:15:40.934 [2024-12-12 20:26:24.766216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74747 ] 00:15:40.934 [2024-12-12 20:26:24.935267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:40.934 [2024-12-12 20:26:25.105196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.934 [2024-12-12 20:26:25.105240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.504 Running I/O for 5 seconds... 
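The bdevperf invocation driving the results below is worth glossing, since every flag shows up in the table. The flags are verbatim from the trace; the glosses are our reading of them.

#   -q 128    queue depth per job
#   -o 4096   I/O size in bytes (4 KiB)
#   -w verify write a pattern, read it back, and compare
#   -t 5      run time in seconds
#   -C        let every core in the mask drive every bdev, which is why
#             each bdev appears twice below (core masks 0x1 and 0x2)
#   -m 0x3    reactor core mask: cores 0 and 1, matching the two
#             "Reactor started" lines above
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3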
00:15:43.827 23264.00 IOPS, 90.88 MiB/s [2024-12-12T20:26:28.998Z] 23008.00 IOPS, 89.88 MiB/s [2024-12-12T20:26:29.943Z] 23413.33 IOPS, 91.46 MiB/s [2024-12-12T20:26:30.886Z] 23248.00 IOPS, 90.81 MiB/s [2024-12-12T20:26:30.886Z] 23545.60 IOPS, 91.98 MiB/s 00:15:46.658 Latency(us) 00:15:46.658 [2024-12-12T20:26:30.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.658 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.658 Verification LBA range: start 0x0 length 0x80000 00:15:46.658 nvme0n1 : 5.03 1907.67 7.45 0.00 0.00 66977.62 8166.79 64124.46 00:15:46.658 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.658 Verification LBA range: start 0x80000 length 0x80000 00:15:46.658 nvme0n1 : 5.02 1783.30 6.97 0.00 0.00 71628.71 8721.33 67350.84 00:15:46.658 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.658 Verification LBA range: start 0x0 length 0x80000 00:15:46.658 nvme0n2 : 5.03 1906.86 7.45 0.00 0.00 66900.42 9326.28 63317.86 00:15:46.658 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.658 Verification LBA range: start 0x80000 length 0x80000 00:15:46.658 nvme0n2 : 5.05 1775.72 6.94 0.00 0.00 71768.71 9779.99 69770.63 00:15:46.658 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.658 Verification LBA range: start 0x0 length 0x80000 00:15:46.658 nvme0n3 : 5.07 1891.66 7.39 0.00 0.00 67326.28 10284.11 58881.58 00:15:46.658 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.658 Verification LBA range: start 0x80000 length 0x80000 00:15:46.658 nvme0n3 : 5.05 1774.70 6.93 0.00 0.00 71642.70 9931.22 66947.54 00:15:46.658 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.658 Verification LBA range: start 0x0 length 0x20000 00:15:46.658 nvme1n1 : 5.08 1890.87 7.39 0.00 0.00 67245.45 12603.08 55655.19 00:15:46.658 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.658 Verification LBA range: start 0x20000 length 0x20000 00:15:46.658 nvme1n1 : 5.07 1766.36 6.90 0.00 0.00 71818.65 7511.43 65334.35 00:15:46.658 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.658 Verification LBA range: start 0x0 length 0xa0000 00:15:46.658 nvme2n1 : 5.09 1887.50 7.37 0.00 0.00 67248.88 10889.06 65737.65 00:15:46.658 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.658 Verification LBA range: start 0xa0000 length 0xa0000 00:15:46.658 nvme2n1 : 5.06 1769.77 6.91 0.00 0.00 71539.84 11141.12 62914.56 00:15:46.658 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.658 Verification LBA range: start 0x0 length 0xbd0bd 00:15:46.659 nvme3n1 : 5.09 2519.27 9.84 0.00 0.00 50241.84 4612.73 61301.37 00:15:46.659 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.659 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:46.659 nvme3n1 : 5.08 2381.37 9.30 0.00 0.00 53053.81 6604.01 58478.28 00:15:46.659 [2024-12-12T20:26:30.887Z] =================================================================================================================== 00:15:46.659 [2024-12-12T20:26:30.887Z] Total : 23255.06 90.84 0.00 0.00 65578.29 4612.73 69770.63 00:15:47.604 00:15:47.604 real 0m7.016s 00:15:47.604 user 0m11.137s 00:15:47.604 sys 0m1.654s 00:15:47.604 20:26:31 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.604 ************************************ 00:15:47.604 END TEST bdev_verify 00:15:47.604 ************************************ 00:15:47.604 20:26:31 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:47.604 20:26:31 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:47.604 20:26:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:47.604 20:26:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.604 20:26:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.604 ************************************ 00:15:47.604 START TEST bdev_verify_big_io 00:15:47.604 ************************************ 00:15:47.604 20:26:31 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:47.865 [2024-12-12 20:26:31.851465] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:15:47.865 [2024-12-12 20:26:31.851624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74851 ] 00:15:47.865 [2024-12-12 20:26:32.018996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:48.128 [2024-12-12 20:26:32.184807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.128 [2024-12-12 20:26:32.184907] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.700 Running I/O for 5 seconds... 
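bdev_verify_big_io, now starting, reuses the same harness with a single change: -o 65536 requests 64 KiB I/Os instead of the 4 KiB used by bdev_verify. That one flag is what drops the IOPS figures below by over an order of magnitude while aggregate bandwidth stays comparable (~85 MiB/s here versus ~91 MiB/s in the 4 KiB run above).

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3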
00:15:54.791 1712.00 IOPS, 107.00 MiB/s [2024-12-12T20:26:39.019Z] 3139.00 IOPS, 196.19 MiB/s 00:15:54.791 Latency(us) 00:15:54.791 [2024-12-12T20:26:39.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.791 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.791 Verification LBA range: start 0x0 length 0x8000 00:15:54.791 nvme0n1 : 6.05 116.33 7.27 0.00 0.00 1081677.55 18047.61 1090519.04 00:15:54.791 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.791 Verification LBA range: start 0x8000 length 0x8000 00:15:54.791 nvme0n1 : 6.05 113.80 7.11 0.00 0.00 1054950.67 31053.98 1109877.37 00:15:54.791 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.791 Verification LBA range: start 0x0 length 0x8000 00:15:54.791 nvme0n2 : 6.07 121.35 7.58 0.00 0.00 1007042.56 31053.98 993727.41 00:15:54.791 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.791 Verification LBA range: start 0x8000 length 0x8000 00:15:54.791 nvme0n2 : 6.05 92.51 5.78 0.00 0.00 1298480.25 95178.44 2000360.37 00:15:54.791 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.791 Verification LBA range: start 0x0 length 0x8000 00:15:54.791 nvme0n3 : 6.07 102.85 6.43 0.00 0.00 1154333.91 31658.93 1361535.61 00:15:54.791 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.791 Verification LBA range: start 0x8000 length 0x8000 00:15:54.791 nvme0n3 : 6.05 103.17 6.45 0.00 0.00 1101940.66 116956.55 1426063.36 00:15:54.791 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.791 Verification LBA range: start 0x0 length 0x2000 00:15:54.791 nvme1n1 : 6.05 105.70 6.61 0.00 0.00 1048973.59 4436.28 1038896.84 00:15:54.791 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.791 Verification LBA range: start 0x2000 length 0x2000 00:15:54.791 nvme1n1 : 6.06 100.39 6.27 0.00 0.00 1117288.64 141154.46 2477865.75 00:15:54.791 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.791 Verification LBA range: start 0x0 length 0xa000 00:15:54.791 nvme2n1 : 6.06 95.08 5.94 0.00 0.00 1174904.87 16837.71 2606921.26 00:15:54.791 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.791 Verification LBA range: start 0xa000 length 0xa000 00:15:54.791 nvme2n1 : 6.07 135.24 8.45 0.00 0.00 811468.71 10838.65 1142141.24 00:15:54.791 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.791 Verification LBA range: start 0x0 length 0xbd0b 00:15:54.791 nvme3n1 : 6.06 137.22 8.58 0.00 0.00 787338.09 7259.37 1090519.04 00:15:54.791 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.791 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:54.791 nvme3n1 : 6.07 137.14 8.57 0.00 0.00 773469.23 1865.26 1264743.98 00:15:54.791 [2024-12-12T20:26:39.019Z] =================================================================================================================== 00:15:54.791 [2024-12-12T20:26:39.019Z] Total : 1360.77 85.05 0.00 0.00 1013433.83 1865.26 2606921.26 00:15:55.725 00:15:55.725 real 0m7.995s 00:15:55.725 user 0m14.599s 00:15:55.725 sys 0m0.510s 00:15:55.725 20:26:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.725 20:26:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- 
# set +x 00:15:55.725 ************************************ 00:15:55.725 END TEST bdev_verify_big_io 00:15:55.725 ************************************ 00:15:55.725 20:26:39 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:55.725 20:26:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:55.725 20:26:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.725 20:26:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:55.725 ************************************ 00:15:55.725 START TEST bdev_write_zeroes 00:15:55.725 ************************************ 00:15:55.725 20:26:39 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:55.725 [2024-12-12 20:26:39.873139] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:15:55.725 [2024-12-12 20:26:39.873254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74967 ] 00:15:55.982 [2024-12-12 20:26:40.033941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.982 [2024-12-12 20:26:40.134536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.547 Running I/O for 1 seconds... 00:15:57.480 76096.00 IOPS, 297.25 MiB/s 00:15:57.480 Latency(us) 00:15:57.480 [2024-12-12T20:26:41.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.480 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:57.480 nvme0n1 : 1.03 10819.40 42.26 0.00 0.00 11820.03 8771.74 26214.40 00:15:57.480 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:57.480 nvme0n2 : 1.03 10807.21 42.22 0.00 0.00 11825.24 8822.15 23391.31 00:15:57.480 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:57.480 nvme0n3 : 1.03 10794.47 42.17 0.00 0.00 11830.50 8872.57 23693.78 00:15:57.480 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:57.480 nvme1n1 : 1.03 10782.62 42.12 0.00 0.00 11835.53 8973.39 26416.05 00:15:57.480 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:57.480 nvme2n1 : 1.02 10866.84 42.45 0.00 0.00 11733.83 4436.28 29844.09 00:15:57.480 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:57.480 nvme3n1 : 1.03 21112.26 82.47 0.00 0.00 6012.31 2810.49 24601.21 00:15:57.480 [2024-12-12T20:26:41.708Z] =================================================================================================================== 00:15:57.480 [2024-12-12T20:26:41.708Z] Total : 75182.81 293.68 0.00 0.00 10176.75 2810.49 29844.09 00:15:58.045 00:15:58.045 real 0m2.444s 00:15:58.045 user 0m1.703s 00:15:58.045 sys 0m0.586s 00:15:58.045 20:26:42 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.045 20:26:42 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:58.045 ************************************ 00:15:58.045 END 
TEST bdev_write_zeroes 00:15:58.045 ************************************ 00:15:58.302 20:26:42 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:58.302 20:26:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:58.302 20:26:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.302 20:26:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:58.302 ************************************ 00:15:58.302 START TEST bdev_json_nonenclosed 00:15:58.302 ************************************ 00:15:58.302 20:26:42 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:58.302 [2024-12-12 20:26:42.346507] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:15:58.302 [2024-12-12 20:26:42.346619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75015 ] 00:15:58.302 [2024-12-12 20:26:42.505965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.559 [2024-12-12 20:26:42.603427] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.559 [2024-12-12 20:26:42.603502] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:58.559 [2024-12-12 20:26:42.603519] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:58.559 [2024-12-12 20:26:42.603528] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:58.559 00:15:58.559 real 0m0.495s 00:15:58.559 user 0m0.296s 00:15:58.559 sys 0m0.095s 00:15:58.559 20:26:42 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.559 20:26:42 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:58.559 ************************************ 00:15:58.559 END TEST bdev_json_nonenclosed 00:15:58.559 ************************************ 00:15:58.816 20:26:42 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:58.816 20:26:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:58.816 20:26:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.816 20:26:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:58.816 ************************************ 00:15:58.816 START TEST bdev_json_nonarray 00:15:58.816 ************************************ 00:15:58.816 20:26:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:58.816 [2024-12-12 20:26:42.884210] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
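bdev_json_nonenclosed, whose startup begins above, is a negative test: bdevperf is pointed at nonenclosed.json, a configuration deliberately not wrapped in an enclosing JSON object, and the pass criterion is that the app rejects it cleanly, so the *ERROR* lines and the "spdk_app_stop'd on non-zero" warning that follow are the expected outcome, not a failure. Its sibling bdev_json_nonarray does the same with a "subsystems" value that is not an array. A minimal illustration of the accepted versus rejected shapes; the file names and exact contents here are inferred from the error strings, since the files themselves are not shown in the log.

# Shape the parser accepts: the config is one enclosing JSON object.
cat > enclosed.json <<'EOF'
{ "subsystems": [] }
EOF

# Shape json_config_prepare_ctx rejects with "not enclosed in {}".
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF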
00:15:58.816 [2024-12-12 20:26:42.884328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75039 ] 00:15:59.073 [2024-12-12 20:26:43.044952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.073 [2024-12-12 20:26:43.141815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.073 [2024-12-12 20:26:43.141899] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:15:59.073 [2024-12-12 20:26:43.141916] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:59.073 [2024-12-12 20:26:43.141925] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:59.331 00:15:59.331 real 0m0.495s 00:15:59.331 user 0m0.308s 00:15:59.331 sys 0m0.084s 00:15:59.331 20:26:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.331 20:26:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:59.331 ************************************ 00:15:59.331 END TEST bdev_json_nonarray 00:15:59.331 ************************************ 00:15:59.331 20:26:43 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:15:59.331 20:26:43 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:15:59.331 20:26:43 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:15:59.331 20:26:43 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:59.331 20:26:43 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:15:59.331 20:26:43 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:59.331 20:26:43 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:59.331 20:26:43 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:59.331 20:26:43 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:59.331 20:26:43 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:59.331 20:26:43 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:59.331 20:26:43 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:59.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:46.268 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:46.268 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:46.268 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:46.268 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:46.268 00:16:46.268 real 1m34.371s 00:16:46.268 user 1m23.700s 00:16:46.268 sys 1m58.903s 00:16:46.268 20:27:30 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.268 20:27:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:46.268 ************************************ 00:16:46.268 END TEST blockdev_xnvme 00:16:46.268 ************************************ 00:16:46.268 20:27:30 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:46.268 20:27:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:46.268 20:27:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.268 20:27:30 -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.268 ************************************ 00:16:46.268 START TEST ublk 00:16:46.268 ************************************ 00:16:46.268 20:27:30 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:46.268 * Looking for test storage... 00:16:46.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:46.268 20:27:30 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:46.527 20:27:30 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:46.527 20:27:30 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:46.527 20:27:30 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:46.527 20:27:30 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.527 20:27:30 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.527 20:27:30 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.527 20:27:30 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.527 20:27:30 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.527 20:27:30 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.527 20:27:30 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.527 20:27:30 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.527 20:27:30 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.527 20:27:30 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.527 20:27:30 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.527 20:27:30 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:46.527 20:27:30 ublk -- scripts/common.sh@345 -- # : 1 00:16:46.527 20:27:30 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.527 20:27:30 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:46.527 20:27:30 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:46.527 20:27:30 ublk -- scripts/common.sh@353 -- # local d=1 00:16:46.527 20:27:30 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.527 20:27:30 ublk -- scripts/common.sh@355 -- # echo 1 00:16:46.527 20:27:30 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.527 20:27:30 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:46.527 20:27:30 ublk -- scripts/common.sh@353 -- # local d=2 00:16:46.527 20:27:30 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.527 20:27:30 ublk -- scripts/common.sh@355 -- # echo 2 00:16:46.527 20:27:30 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.527 20:27:30 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.527 20:27:30 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.527 20:27:30 ublk -- scripts/common.sh@368 -- # return 0 00:16:46.527 20:27:30 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.527 20:27:30 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:46.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.527 --rc genhtml_branch_coverage=1 00:16:46.527 --rc genhtml_function_coverage=1 00:16:46.527 --rc genhtml_legend=1 00:16:46.527 --rc geninfo_all_blocks=1 00:16:46.527 --rc geninfo_unexecuted_blocks=1 00:16:46.527 00:16:46.527 ' 00:16:46.527 20:27:30 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:46.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.527 --rc genhtml_branch_coverage=1 00:16:46.527 --rc genhtml_function_coverage=1 00:16:46.527 --rc genhtml_legend=1 00:16:46.527 --rc geninfo_all_blocks=1 00:16:46.527 --rc geninfo_unexecuted_blocks=1 00:16:46.527 00:16:46.527 ' 00:16:46.527 20:27:30 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:46.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.527 --rc genhtml_branch_coverage=1 00:16:46.527 --rc genhtml_function_coverage=1 00:16:46.527 --rc genhtml_legend=1 00:16:46.527 --rc geninfo_all_blocks=1 00:16:46.527 --rc geninfo_unexecuted_blocks=1 00:16:46.527 00:16:46.527 ' 00:16:46.527 20:27:30 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:46.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.527 --rc genhtml_branch_coverage=1 00:16:46.527 --rc genhtml_function_coverage=1 00:16:46.527 --rc genhtml_legend=1 00:16:46.527 --rc geninfo_all_blocks=1 00:16:46.527 --rc geninfo_unexecuted_blocks=1 00:16:46.527 00:16:46.527 ' 00:16:46.527 20:27:30 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:46.527 20:27:30 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:46.527 20:27:30 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:46.527 20:27:30 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:46.527 20:27:30 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:46.527 20:27:30 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:46.527 20:27:30 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:46.527 20:27:30 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:46.527 20:27:30 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:46.527 20:27:30 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:46.527 20:27:30 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:46.527 20:27:30 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:46.527 20:27:30 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:46.527 20:27:30 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:46.527 20:27:30 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:46.527 20:27:30 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:46.527 20:27:30 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:46.527 20:27:30 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:46.527 20:27:30 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:46.527 20:27:30 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:46.527 20:27:30 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:46.527 20:27:30 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.527 20:27:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.527 ************************************ 00:16:46.527 START TEST test_save_ublk_config 00:16:46.527 ************************************ 00:16:46.527 20:27:30 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:46.527 20:27:30 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:46.527 20:27:30 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75348 00:16:46.527 20:27:30 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:46.527 20:27:30 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75348 00:16:46.527 20:27:30 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:46.527 20:27:30 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75348 ']' 00:16:46.527 20:27:30 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.527 20:27:30 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.527 20:27:30 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.527 20:27:30 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.527 20:27:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:46.527 [2024-12-12 20:27:30.657408] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
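The test_save_ublk_config run starting here is a config round trip: start spdk_tgt with -L ublk, create a ublk target plus a malloc-backed disk, capture the live configuration with save_config, kill the target, relaunch it with the saved JSON, and check that /dev/ublkb0 comes back. A rough outline of the same flow with scripts/rpc.py (malloc size derived from the saved config below, 8192 blocks x 4096 B = 32 MiB; a sketch, not the test script verbatim):

    build/bin/spdk_tgt -L ublk &
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
    scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    scripts/rpc.py save_config > ublk.json
    kill %1 && wait
    build/bin/spdk_tgt -L ublk -c ublk.json &

The test itself pipes the JSON back over a file descriptor (-c /dev/fd/63 via process substitution, visible further down) rather than a file, but the effect is the same.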
00:16:46.527 [2024-12-12 20:27:30.657510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75348 ] 00:16:46.785 [2024-12-12 20:27:30.806194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.785 [2024-12-12 20:27:30.891648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.352 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.352 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:47.352 20:27:31 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:47.352 20:27:31 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:47.352 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.352 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:47.352 [2024-12-12 20:27:31.498432] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:47.352 [2024-12-12 20:27:31.499114] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:47.352 malloc0 00:16:47.352 [2024-12-12 20:27:31.546811] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:47.352 [2024-12-12 20:27:31.546878] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:47.352 [2024-12-12 20:27:31.546887] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:47.352 [2024-12-12 20:27:31.546892] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:47.352 [2024-12-12 20:27:31.555515] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:47.352 [2024-12-12 20:27:31.555539] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:47.352 [2024-12-12 20:27:31.562437] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:47.352 [2024-12-12 20:27:31.562523] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:47.352 [2024-12-12 20:27:31.579435] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:47.611 0 00:16:47.611 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.611 20:27:31 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:47.611 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.611 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:47.869 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.869 20:27:31 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:47.869 "subsystems": [ 00:16:47.869 { 00:16:47.869 "subsystem": "fsdev", 00:16:47.869 "config": [ 00:16:47.869 { 00:16:47.869 "method": "fsdev_set_opts", 00:16:47.869 "params": { 00:16:47.869 "fsdev_io_pool_size": 65535, 00:16:47.869 "fsdev_io_cache_size": 256 00:16:47.869 } 00:16:47.869 } 00:16:47.869 ] 00:16:47.869 }, 00:16:47.869 { 00:16:47.869 "subsystem": "keyring", 00:16:47.869 "config": [] 00:16:47.869 }, 00:16:47.869 { 00:16:47.869 "subsystem": "iobuf", 00:16:47.869 "config": [ 00:16:47.869 { 
00:16:47.869 "method": "iobuf_set_options", 00:16:47.869 "params": { 00:16:47.869 "small_pool_count": 8192, 00:16:47.869 "large_pool_count": 1024, 00:16:47.869 "small_bufsize": 8192, 00:16:47.869 "large_bufsize": 135168, 00:16:47.869 "enable_numa": false 00:16:47.869 } 00:16:47.869 } 00:16:47.869 ] 00:16:47.869 }, 00:16:47.869 { 00:16:47.869 "subsystem": "sock", 00:16:47.869 "config": [ 00:16:47.869 { 00:16:47.869 "method": "sock_set_default_impl", 00:16:47.869 "params": { 00:16:47.869 "impl_name": "posix" 00:16:47.869 } 00:16:47.869 }, 00:16:47.869 { 00:16:47.869 "method": "sock_impl_set_options", 00:16:47.869 "params": { 00:16:47.869 "impl_name": "ssl", 00:16:47.869 "recv_buf_size": 4096, 00:16:47.869 "send_buf_size": 4096, 00:16:47.869 "enable_recv_pipe": true, 00:16:47.870 "enable_quickack": false, 00:16:47.870 "enable_placement_id": 0, 00:16:47.870 "enable_zerocopy_send_server": true, 00:16:47.870 "enable_zerocopy_send_client": false, 00:16:47.870 "zerocopy_threshold": 0, 00:16:47.870 "tls_version": 0, 00:16:47.870 "enable_ktls": false 00:16:47.870 } 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "method": "sock_impl_set_options", 00:16:47.870 "params": { 00:16:47.870 "impl_name": "posix", 00:16:47.870 "recv_buf_size": 2097152, 00:16:47.870 "send_buf_size": 2097152, 00:16:47.870 "enable_recv_pipe": true, 00:16:47.870 "enable_quickack": false, 00:16:47.870 "enable_placement_id": 0, 00:16:47.870 "enable_zerocopy_send_server": true, 00:16:47.870 "enable_zerocopy_send_client": false, 00:16:47.870 "zerocopy_threshold": 0, 00:16:47.870 "tls_version": 0, 00:16:47.870 "enable_ktls": false 00:16:47.870 } 00:16:47.870 } 00:16:47.870 ] 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "subsystem": "vmd", 00:16:47.870 "config": [] 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "subsystem": "accel", 00:16:47.870 "config": [ 00:16:47.870 { 00:16:47.870 "method": "accel_set_options", 00:16:47.870 "params": { 00:16:47.870 "small_cache_size": 128, 00:16:47.870 "large_cache_size": 16, 00:16:47.870 "task_count": 2048, 00:16:47.870 "sequence_count": 2048, 00:16:47.870 "buf_count": 2048 00:16:47.870 } 00:16:47.870 } 00:16:47.870 ] 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "subsystem": "bdev", 00:16:47.870 "config": [ 00:16:47.870 { 00:16:47.870 "method": "bdev_set_options", 00:16:47.870 "params": { 00:16:47.870 "bdev_io_pool_size": 65535, 00:16:47.870 "bdev_io_cache_size": 256, 00:16:47.870 "bdev_auto_examine": true, 00:16:47.870 "iobuf_small_cache_size": 128, 00:16:47.870 "iobuf_large_cache_size": 16 00:16:47.870 } 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "method": "bdev_raid_set_options", 00:16:47.870 "params": { 00:16:47.870 "process_window_size_kb": 1024, 00:16:47.870 "process_max_bandwidth_mb_sec": 0 00:16:47.870 } 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "method": "bdev_iscsi_set_options", 00:16:47.870 "params": { 00:16:47.870 "timeout_sec": 30 00:16:47.870 } 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "method": "bdev_nvme_set_options", 00:16:47.870 "params": { 00:16:47.870 "action_on_timeout": "none", 00:16:47.870 "timeout_us": 0, 00:16:47.870 "timeout_admin_us": 0, 00:16:47.870 "keep_alive_timeout_ms": 10000, 00:16:47.870 "arbitration_burst": 0, 00:16:47.870 "low_priority_weight": 0, 00:16:47.870 "medium_priority_weight": 0, 00:16:47.870 "high_priority_weight": 0, 00:16:47.870 "nvme_adminq_poll_period_us": 10000, 00:16:47.870 "nvme_ioq_poll_period_us": 0, 00:16:47.870 "io_queue_requests": 0, 00:16:47.870 "delay_cmd_submit": true, 00:16:47.870 "transport_retry_count": 4, 00:16:47.870 
"bdev_retry_count": 3, 00:16:47.870 "transport_ack_timeout": 0, 00:16:47.870 "ctrlr_loss_timeout_sec": 0, 00:16:47.870 "reconnect_delay_sec": 0, 00:16:47.870 "fast_io_fail_timeout_sec": 0, 00:16:47.870 "disable_auto_failback": false, 00:16:47.870 "generate_uuids": false, 00:16:47.870 "transport_tos": 0, 00:16:47.870 "nvme_error_stat": false, 00:16:47.870 "rdma_srq_size": 0, 00:16:47.870 "io_path_stat": false, 00:16:47.870 "allow_accel_sequence": false, 00:16:47.870 "rdma_max_cq_size": 0, 00:16:47.870 "rdma_cm_event_timeout_ms": 0, 00:16:47.870 "dhchap_digests": [ 00:16:47.870 "sha256", 00:16:47.870 "sha384", 00:16:47.870 "sha512" 00:16:47.870 ], 00:16:47.870 "dhchap_dhgroups": [ 00:16:47.870 "null", 00:16:47.870 "ffdhe2048", 00:16:47.870 "ffdhe3072", 00:16:47.870 "ffdhe4096", 00:16:47.870 "ffdhe6144", 00:16:47.870 "ffdhe8192" 00:16:47.870 ], 00:16:47.870 "rdma_umr_per_io": false 00:16:47.870 } 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "method": "bdev_nvme_set_hotplug", 00:16:47.870 "params": { 00:16:47.870 "period_us": 100000, 00:16:47.870 "enable": false 00:16:47.870 } 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "method": "bdev_malloc_create", 00:16:47.870 "params": { 00:16:47.870 "name": "malloc0", 00:16:47.870 "num_blocks": 8192, 00:16:47.870 "block_size": 4096, 00:16:47.870 "physical_block_size": 4096, 00:16:47.870 "uuid": "54f47202-f3b4-434a-b7ec-6271bb4a3876", 00:16:47.870 "optimal_io_boundary": 0, 00:16:47.870 "md_size": 0, 00:16:47.870 "dif_type": 0, 00:16:47.870 "dif_is_head_of_md": false, 00:16:47.870 "dif_pi_format": 0 00:16:47.870 } 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "method": "bdev_wait_for_examine" 00:16:47.870 } 00:16:47.870 ] 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "subsystem": "scsi", 00:16:47.870 "config": null 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "subsystem": "scheduler", 00:16:47.870 "config": [ 00:16:47.870 { 00:16:47.870 "method": "framework_set_scheduler", 00:16:47.870 "params": { 00:16:47.870 "name": "static" 00:16:47.870 } 00:16:47.870 } 00:16:47.870 ] 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "subsystem": "vhost_scsi", 00:16:47.870 "config": [] 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "subsystem": "vhost_blk", 00:16:47.870 "config": [] 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "subsystem": "ublk", 00:16:47.870 "config": [ 00:16:47.870 { 00:16:47.870 "method": "ublk_create_target", 00:16:47.870 "params": { 00:16:47.870 "cpumask": "1" 00:16:47.870 } 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "method": "ublk_start_disk", 00:16:47.870 "params": { 00:16:47.870 "bdev_name": "malloc0", 00:16:47.870 "ublk_id": 0, 00:16:47.870 "num_queues": 1, 00:16:47.870 "queue_depth": 128 00:16:47.870 } 00:16:47.870 } 00:16:47.870 ] 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "subsystem": "nbd", 00:16:47.870 "config": [] 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "subsystem": "nvmf", 00:16:47.870 "config": [ 00:16:47.870 { 00:16:47.870 "method": "nvmf_set_config", 00:16:47.870 "params": { 00:16:47.870 "discovery_filter": "match_any", 00:16:47.870 "admin_cmd_passthru": { 00:16:47.870 "identify_ctrlr": false 00:16:47.870 }, 00:16:47.870 "dhchap_digests": [ 00:16:47.870 "sha256", 00:16:47.870 "sha384", 00:16:47.870 "sha512" 00:16:47.870 ], 00:16:47.870 "dhchap_dhgroups": [ 00:16:47.870 "null", 00:16:47.870 "ffdhe2048", 00:16:47.870 "ffdhe3072", 00:16:47.870 "ffdhe4096", 00:16:47.870 "ffdhe6144", 00:16:47.870 "ffdhe8192" 00:16:47.870 ] 00:16:47.870 } 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "method": "nvmf_set_max_subsystems", 00:16:47.870 "params": { 
00:16:47.870 "max_subsystems": 1024 00:16:47.870 } 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "method": "nvmf_set_crdt", 00:16:47.870 "params": { 00:16:47.870 "crdt1": 0, 00:16:47.870 "crdt2": 0, 00:16:47.870 "crdt3": 0 00:16:47.870 } 00:16:47.870 } 00:16:47.870 ] 00:16:47.870 }, 00:16:47.870 { 00:16:47.870 "subsystem": "iscsi", 00:16:47.870 "config": [ 00:16:47.870 { 00:16:47.870 "method": "iscsi_set_options", 00:16:47.870 "params": { 00:16:47.870 "node_base": "iqn.2016-06.io.spdk", 00:16:47.870 "max_sessions": 128, 00:16:47.870 "max_connections_per_session": 2, 00:16:47.870 "max_queue_depth": 64, 00:16:47.870 "default_time2wait": 2, 00:16:47.870 "default_time2retain": 20, 00:16:47.870 "first_burst_length": 8192, 00:16:47.870 "immediate_data": true, 00:16:47.870 "allow_duplicated_isid": false, 00:16:47.870 "error_recovery_level": 0, 00:16:47.870 "nop_timeout": 60, 00:16:47.870 "nop_in_interval": 30, 00:16:47.870 "disable_chap": false, 00:16:47.870 "require_chap": false, 00:16:47.870 "mutual_chap": false, 00:16:47.870 "chap_group": 0, 00:16:47.870 "max_large_datain_per_connection": 64, 00:16:47.870 "max_r2t_per_connection": 4, 00:16:47.870 "pdu_pool_size": 36864, 00:16:47.870 "immediate_data_pool_size": 16384, 00:16:47.870 "data_out_pool_size": 2048 00:16:47.870 } 00:16:47.870 } 00:16:47.870 ] 00:16:47.870 } 00:16:47.870 ] 00:16:47.870 }' 00:16:47.870 20:27:31 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75348 00:16:47.870 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75348 ']' 00:16:47.870 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75348 00:16:47.870 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:47.870 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.870 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75348 00:16:47.870 killing process with pid 75348 00:16:47.870 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.870 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.870 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75348' 00:16:47.870 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75348 00:16:47.870 20:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75348 00:16:48.804 [2024-12-12 20:27:32.864359] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.804 [2024-12-12 20:27:32.903461] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.804 [2024-12-12 20:27:32.903618] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.804 [2024-12-12 20:27:32.914436] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:48.804 [2024-12-12 20:27:32.914503] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:48.804 [2024-12-12 20:27:32.914515] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:48.804 [2024-12-12 20:27:32.914538] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:48.804 [2024-12-12 20:27:32.914684] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:50.231 20:27:34 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75403 00:16:50.231 20:27:34 
ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75403 00:16:50.231 20:27:34 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75403 ']' 00:16:50.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.231 20:27:34 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.231 20:27:34 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.231 20:27:34 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.231 20:27:34 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.231 20:27:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:50.231 20:27:34 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:50.231 20:27:34 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:50.231 "subsystems": [ 00:16:50.231 { 00:16:50.231 "subsystem": "fsdev", 00:16:50.231 "config": [ 00:16:50.231 { 00:16:50.231 "method": "fsdev_set_opts", 00:16:50.231 "params": { 00:16:50.231 "fsdev_io_pool_size": 65535, 00:16:50.231 "fsdev_io_cache_size": 256 00:16:50.231 } 00:16:50.231 } 00:16:50.231 ] 00:16:50.231 }, 00:16:50.231 { 00:16:50.231 "subsystem": "keyring", 00:16:50.231 "config": [] 00:16:50.231 }, 00:16:50.231 { 00:16:50.231 "subsystem": "iobuf", 00:16:50.231 "config": [ 00:16:50.231 { 00:16:50.231 "method": "iobuf_set_options", 00:16:50.231 "params": { 00:16:50.231 "small_pool_count": 8192, 00:16:50.231 "large_pool_count": 1024, 00:16:50.231 "small_bufsize": 8192, 00:16:50.231 "large_bufsize": 135168, 00:16:50.231 "enable_numa": false 00:16:50.231 } 00:16:50.231 } 00:16:50.231 ] 00:16:50.231 }, 00:16:50.231 { 00:16:50.231 "subsystem": "sock", 00:16:50.232 "config": [ 00:16:50.232 { 00:16:50.232 "method": "sock_set_default_impl", 00:16:50.232 "params": { 00:16:50.232 "impl_name": "posix" 00:16:50.232 } 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "method": "sock_impl_set_options", 00:16:50.232 "params": { 00:16:50.232 "impl_name": "ssl", 00:16:50.232 "recv_buf_size": 4096, 00:16:50.232 "send_buf_size": 4096, 00:16:50.232 "enable_recv_pipe": true, 00:16:50.232 "enable_quickack": false, 00:16:50.232 "enable_placement_id": 0, 00:16:50.232 "enable_zerocopy_send_server": true, 00:16:50.232 "enable_zerocopy_send_client": false, 00:16:50.232 "zerocopy_threshold": 0, 00:16:50.232 "tls_version": 0, 00:16:50.232 "enable_ktls": false 00:16:50.232 } 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "method": "sock_impl_set_options", 00:16:50.232 "params": { 00:16:50.232 "impl_name": "posix", 00:16:50.232 "recv_buf_size": 2097152, 00:16:50.232 "send_buf_size": 2097152, 00:16:50.232 "enable_recv_pipe": true, 00:16:50.232 "enable_quickack": false, 00:16:50.232 "enable_placement_id": 0, 00:16:50.232 "enable_zerocopy_send_server": true, 00:16:50.232 "enable_zerocopy_send_client": false, 00:16:50.232 "zerocopy_threshold": 0, 00:16:50.232 "tls_version": 0, 00:16:50.232 "enable_ktls": false 00:16:50.232 } 00:16:50.232 } 00:16:50.232 ] 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "subsystem": "vmd", 00:16:50.232 "config": [] 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "subsystem": "accel", 00:16:50.232 "config": [ 00:16:50.232 { 00:16:50.232 "method": "accel_set_options", 00:16:50.232 "params": { 00:16:50.232 
"small_cache_size": 128, 00:16:50.232 "large_cache_size": 16, 00:16:50.232 "task_count": 2048, 00:16:50.232 "sequence_count": 2048, 00:16:50.232 "buf_count": 2048 00:16:50.232 } 00:16:50.232 } 00:16:50.232 ] 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "subsystem": "bdev", 00:16:50.232 "config": [ 00:16:50.232 { 00:16:50.232 "method": "bdev_set_options", 00:16:50.232 "params": { 00:16:50.232 "bdev_io_pool_size": 65535, 00:16:50.232 "bdev_io_cache_size": 256, 00:16:50.232 "bdev_auto_examine": true, 00:16:50.232 "iobuf_small_cache_size": 128, 00:16:50.232 "iobuf_large_cache_size": 16 00:16:50.232 } 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "method": "bdev_raid_set_options", 00:16:50.232 "params": { 00:16:50.232 "process_window_size_kb": 1024, 00:16:50.232 "process_max_bandwidth_mb_sec": 0 00:16:50.232 } 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "method": "bdev_iscsi_set_options", 00:16:50.232 "params": { 00:16:50.232 "timeout_sec": 30 00:16:50.232 } 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "method": "bdev_nvme_set_options", 00:16:50.232 "params": { 00:16:50.232 "action_on_timeout": "none", 00:16:50.232 "timeout_us": 0, 00:16:50.232 "timeout_admin_us": 0, 00:16:50.232 "keep_alive_timeout_ms": 10000, 00:16:50.232 "arbitration_burst": 0, 00:16:50.232 "low_priority_weight": 0, 00:16:50.232 "medium_priority_weight": 0, 00:16:50.232 "high_priority_weight": 0, 00:16:50.232 "nvme_adminq_poll_period_us": 10000, 00:16:50.232 "nvme_ioq_poll_period_us": 0, 00:16:50.232 "io_queue_requests": 0, 00:16:50.232 "delay_cmd_submit": true, 00:16:50.232 "transport_retry_count": 4, 00:16:50.232 "bdev_retry_count": 3, 00:16:50.232 "transport_ack_timeout": 0, 00:16:50.232 "ctrlr_loss_timeout_sec": 0, 00:16:50.232 "reconnect_delay_sec": 0, 00:16:50.232 "fast_io_fail_timeout_sec": 0, 00:16:50.232 "disable_auto_failback": false, 00:16:50.232 "generate_uuids": false, 00:16:50.232 "transport_tos": 0, 00:16:50.232 "nvme_error_stat": false, 00:16:50.232 "rdma_srq_size": 0, 00:16:50.232 "io_path_stat": false, 00:16:50.232 "allow_accel_sequence": false, 00:16:50.232 "rdma_max_cq_size": 0, 00:16:50.232 "rdma_cm_event_timeout_ms": 0, 00:16:50.232 "dhchap_digests": [ 00:16:50.232 "sha256", 00:16:50.232 "sha384", 00:16:50.232 "sha512" 00:16:50.232 ], 00:16:50.232 "dhchap_dhgroups": [ 00:16:50.232 "null", 00:16:50.232 "ffdhe2048", 00:16:50.232 "ffdhe3072", 00:16:50.232 "ffdhe4096", 00:16:50.232 "ffdhe6144", 00:16:50.232 "ffdhe8192" 00:16:50.232 ], 00:16:50.232 "rdma_umr_per_io": false 00:16:50.232 } 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "method": "bdev_nvme_set_hotplug", 00:16:50.232 "params": { 00:16:50.232 "period_us": 100000, 00:16:50.232 "enable": false 00:16:50.232 } 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "method": "bdev_malloc_create", 00:16:50.232 "params": { 00:16:50.232 "name": "malloc0", 00:16:50.232 "num_blocks": 8192, 00:16:50.232 "block_size": 4096, 00:16:50.232 "physical_block_size": 4096, 00:16:50.232 "uuid": "54f47202-f3b4-434a-b7ec-6271bb4a3876", 00:16:50.232 "optimal_io_boundary": 0, 00:16:50.232 "md_size": 0, 00:16:50.232 "dif_type": 0, 00:16:50.232 "dif_is_head_of_md": false, 00:16:50.232 "dif_pi_format": 0 00:16:50.232 } 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "method": "bdev_wait_for_examine" 00:16:50.232 } 00:16:50.232 ] 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "subsystem": "scsi", 00:16:50.232 "config": null 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "subsystem": "scheduler", 00:16:50.232 "config": [ 00:16:50.232 { 00:16:50.232 "method": "framework_set_scheduler", 00:16:50.232 
"params": { 00:16:50.232 "name": "static" 00:16:50.232 } 00:16:50.232 } 00:16:50.232 ] 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "subsystem": "vhost_scsi", 00:16:50.232 "config": [] 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "subsystem": "vhost_blk", 00:16:50.232 "config": [] 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "subsystem": "ublk", 00:16:50.232 "config": [ 00:16:50.232 { 00:16:50.232 "method": "ublk_create_target", 00:16:50.232 "params": { 00:16:50.232 "cpumask": "1" 00:16:50.232 } 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "method": "ublk_start_disk", 00:16:50.232 "params": { 00:16:50.232 "bdev_name": "malloc0", 00:16:50.232 "ublk_id": 0, 00:16:50.232 "num_queues": 1, 00:16:50.232 "queue_depth": 128 00:16:50.232 } 00:16:50.232 } 00:16:50.232 ] 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "subsystem": "nbd", 00:16:50.232 "config": [] 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "subsystem": "nvmf", 00:16:50.232 "config": [ 00:16:50.232 { 00:16:50.232 "method": "nvmf_set_config", 00:16:50.232 "params": { 00:16:50.232 "discovery_filter": "match_any", 00:16:50.232 "admin_cmd_passthru": { 00:16:50.232 "identify_ctrlr": false 00:16:50.232 }, 00:16:50.232 "dhchap_digests": [ 00:16:50.232 "sha256", 00:16:50.232 "sha384", 00:16:50.232 "sha512" 00:16:50.232 ], 00:16:50.232 "dhchap_dhgroups": [ 00:16:50.232 "null", 00:16:50.232 "ffdhe2048", 00:16:50.232 "ffdhe3072", 00:16:50.232 "ffdhe4096", 00:16:50.232 "ffdhe6144", 00:16:50.232 "ffdhe8192" 00:16:50.232 ] 00:16:50.232 } 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "method": "nvmf_set_max_subsystems", 00:16:50.232 "params": { 00:16:50.232 "max_subsystems": 1024 00:16:50.232 } 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "method": "nvmf_set_crdt", 00:16:50.232 "params": { 00:16:50.232 "crdt1": 0, 00:16:50.232 "crdt2": 0, 00:16:50.232 "crdt3": 0 00:16:50.232 } 00:16:50.232 } 00:16:50.232 ] 00:16:50.232 }, 00:16:50.232 { 00:16:50.232 "subsystem": "iscsi", 00:16:50.232 "config": [ 00:16:50.232 { 00:16:50.232 "method": "iscsi_set_options", 00:16:50.232 "params": { 00:16:50.232 "node_base": "iqn.2016-06.io.spdk", 00:16:50.232 "max_sessions": 128, 00:16:50.232 "max_connections_per_session": 2, 00:16:50.232 "max_queue_depth": 64, 00:16:50.232 "default_time2wait": 2, 00:16:50.232 "default_time2retain": 20, 00:16:50.232 "first_burst_length": 8192, 00:16:50.232 "immediate_data": true, 00:16:50.232 "allow_duplicated_isid": false, 00:16:50.232 "error_recovery_level": 0, 00:16:50.232 "nop_timeout": 60, 00:16:50.232 "nop_in_interval": 30, 00:16:50.232 "disable_chap": false, 00:16:50.232 "require_chap": false, 00:16:50.232 "mutual_chap": false, 00:16:50.232 "chap_group": 0, 00:16:50.232 "max_large_datain_per_connection": 64, 00:16:50.232 "max_r2t_per_connection": 4, 00:16:50.232 "pdu_pool_size": 36864, 00:16:50.232 "immediate_data_pool_size": 16384, 00:16:50.232 "data_out_pool_size": 2048 00:16:50.232 } 00:16:50.232 } 00:16:50.232 ] 00:16:50.232 } 00:16:50.232 ] 00:16:50.232 }' 00:16:50.232 [2024-12-12 20:27:34.350871] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:16:50.232 [2024-12-12 20:27:34.350994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75403 ] 00:16:50.491 [2024-12-12 20:27:34.504614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.491 [2024-12-12 20:27:34.586621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.056 [2024-12-12 20:27:35.241431] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:51.056 [2024-12-12 20:27:35.242090] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:51.056 [2024-12-12 20:27:35.249519] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:51.056 [2024-12-12 20:27:35.249581] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:51.056 [2024-12-12 20:27:35.249588] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:51.056 [2024-12-12 20:27:35.249594] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:51.056 [2024-12-12 20:27:35.258497] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:51.056 [2024-12-12 20:27:35.258514] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:51.056 [2024-12-12 20:27:35.265434] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:51.056 [2024-12-12 20:27:35.265514] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:51.056 [2024-12-12 20:27:35.282433] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75403 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75403 ']' 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75403 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75403 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.314 killing process with pid 75403 00:16:51.314 
20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75403' 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75403 00:16:51.314 20:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75403 00:16:52.247 [2024-12-12 20:27:36.382525] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:52.247 [2024-12-12 20:27:36.412499] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:52.247 [2024-12-12 20:27:36.412604] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:52.247 [2024-12-12 20:27:36.421445] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:52.247 [2024-12-12 20:27:36.421486] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:52.247 [2024-12-12 20:27:36.421493] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:52.247 [2024-12-12 20:27:36.421512] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:52.247 [2024-12-12 20:27:36.421633] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:53.620 20:27:37 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:53.620 00:16:53.620 real 0m7.024s 00:16:53.620 user 0m4.888s 00:16:53.620 sys 0m2.749s 00:16:53.620 20:27:37 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:53.620 20:27:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:53.620 ************************************ 00:16:53.620 END TEST test_save_ublk_config 00:16:53.620 ************************************ 00:16:53.620 20:27:37 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75470 00:16:53.620 20:27:37 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:53.620 20:27:37 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.620 20:27:37 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75470 00:16:53.620 20:27:37 ublk -- common/autotest_common.sh@835 -- # '[' -z 75470 ']' 00:16:53.620 20:27:37 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.620 20:27:37 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.620 20:27:37 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.620 20:27:37 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.620 20:27:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.620 [2024-12-12 20:27:37.720952] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:16:53.620 [2024-12-12 20:27:37.721063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75470 ] 00:16:53.878 [2024-12-12 20:27:37.880852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:53.878 [2024-12-12 20:27:37.966299] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.878 [2024-12-12 20:27:37.966474] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.491 20:27:38 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.491 20:27:38 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:54.491 20:27:38 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:54.491 20:27:38 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:54.491 20:27:38 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.491 20:27:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.491 ************************************ 00:16:54.491 START TEST test_create_ublk 00:16:54.491 ************************************ 00:16:54.491 20:27:38 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:54.491 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:54.491 20:27:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.491 20:27:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.491 [2024-12-12 20:27:38.572431] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:54.491 [2024-12-12 20:27:38.574056] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:54.491 20:27:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.491 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:54.491 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:54.491 20:27:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.491 20:27:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.772 20:27:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:54.772 20:27:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.772 20:27:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.772 [2024-12-12 20:27:38.747547] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:54.772 [2024-12-12 20:27:38.747860] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:54.772 [2024-12-12 20:27:38.747874] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:54.772 [2024-12-12 20:27:38.747881] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:54.772 [2024-12-12 20:27:38.756619] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:54.772 [2024-12-12 20:27:38.756638] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:54.772 
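The DEBUG lines here trace the same fixed control-command ladder that every ublk device in this file goes through; summarized as comments (the kernel driver must be loaded first, which ublk.sh did above with modprobe):

    sudo modprobe ublk_drv   # done once by ublk.sh@133 before any test
    # per-device bring-up, in the order traced:
    #   UBLK_CMD_ADD_DEV -> UBLK_CMD_SET_PARAMS -> UBLK_CMD_START_DEV
    # teardown, seen in each killprocess section:
    #   UBLK_CMD_STOP_DEV -> UBLK_CMD_DEL_DEV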
[2024-12-12 20:27:38.763433] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:54.772 [2024-12-12 20:27:38.763951] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:54.772 [2024-12-12 20:27:38.778440] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:54.772 20:27:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:54.772 20:27:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.772 20:27:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.772 20:27:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:54.772 { 00:16:54.772 "ublk_device": "/dev/ublkb0", 00:16:54.772 "id": 0, 00:16:54.772 "queue_depth": 512, 00:16:54.772 "num_queues": 4, 00:16:54.772 "bdev_name": "Malloc0" 00:16:54.772 } 00:16:54.772 ]' 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:54.772 20:27:38 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:54.772 20:27:38 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:54.772 20:27:38 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:54.772 20:27:38 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:54.772 20:27:38 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:54.772 20:27:38 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:54.772 20:27:38 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:54.772 20:27:38 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:54.772 20:27:38 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:54.772 20:27:38 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:54.772 20:27:38 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
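run_fio_test, traced above, expands into the single fio command on the next line: a 10-second time-based 0xcc pattern write across the full 128 MiB device. As fio itself notes below, the verify read phase never runs because the write phase consumes the entire runtime. A hedged sketch of a separate verify pass one could run afterwards (hypothetical, not part of this test):

    fio --name=verify --filename=/dev/ublkb0 --direct=1 \
        --rw=read --bs=4096 --size=134217728 \
        --verify=pattern --verify_pattern=0xcc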
00:16:54.772 20:27:38 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:55.030 fio: verification read phase will never start because write phase uses all of runtime 00:16:55.030 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:55.030 fio-3.35 00:16:55.030 Starting 1 process 00:17:04.991 00:17:04.991 fio_test: (groupid=0, jobs=1): err= 0: pid=75509: Thu Dec 12 20:27:49 2024 00:17:04.991 write: IOPS=18.0k, BW=70.3MiB/s (73.8MB/s)(703MiB/10001msec); 0 zone resets 00:17:04.991 clat (usec): min=35, max=8017, avg=54.68, stdev=114.90 00:17:04.991 lat (usec): min=36, max=8018, avg=55.17, stdev=114.92 00:17:04.991 clat percentiles (usec): 00:17:04.991 | 1.00th=[ 40], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 46], 00:17:04.991 | 30.00th=[ 48], 40.00th=[ 49], 50.00th=[ 50], 60.00th=[ 51], 00:17:04.991 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 58], 95.00th=[ 62], 00:17:04.991 | 99.00th=[ 72], 99.50th=[ 82], 99.90th=[ 2343], 99.95th=[ 3326], 00:17:04.991 | 99.99th=[ 3916] 00:17:04.991 bw ( KiB/s): min=31040, max=79520, per=99.69%, avg=71810.95, stdev=10599.75, samples=19 00:17:04.991 iops : min= 7760, max=19880, avg=17952.74, stdev=2649.94, samples=19 00:17:04.991 lat (usec) : 50=55.88%, 100=43.75%, 250=0.15%, 500=0.03%, 750=0.01% 00:17:04.991 lat (usec) : 1000=0.02% 00:17:04.991 lat (msec) : 2=0.05%, 4=0.11%, 10=0.01% 00:17:04.991 cpu : usr=3.32%, sys=14.56%, ctx=180126, majf=0, minf=797 00:17:04.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:04.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.991 issued rwts: total=0,180093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.991 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:04.992 00:17:04.992 Run status group 0 (all jobs): 00:17:04.992 WRITE: bw=70.3MiB/s (73.8MB/s), 70.3MiB/s-70.3MiB/s (73.8MB/s-73.8MB/s), io=703MiB (738MB), run=10001-10001msec 00:17:04.992 00:17:04.992 Disk stats (read/write): 00:17:04.992 ublkb0: ios=0/178064, merge=0/0, ticks=0/8209, in_queue=8210, util=99.08% 00:17:04.992 20:27:49 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:04.992 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.992 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.992 [2024-12-12 20:27:49.196388] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:05.250 [2024-12-12 20:27:49.237889] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:05.250 [2024-12-12 20:27:49.238772] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:05.250 [2024-12-12 20:27:49.243435] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:05.250 [2024-12-12 20:27:49.243663] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:05.250 [2024-12-12 20:27:49.243677] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.250 20:27:49 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.250 [2024-12-12 20:27:49.259494] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:05.250 request: 00:17:05.250 { 00:17:05.250 "ublk_id": 0, 00:17:05.250 "method": "ublk_stop_disk", 00:17:05.250 "req_id": 1 00:17:05.250 } 00:17:05.250 Got JSON-RPC error response 00:17:05.250 response: 00:17:05.250 { 00:17:05.250 "code": -19, 00:17:05.250 "message": "No such device" 00:17:05.250 } 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:05.250 20:27:49 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.250 [2024-12-12 20:27:49.275495] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:05.250 [2024-12-12 20:27:49.279112] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:05.250 [2024-12-12 20:27:49.279148] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.250 20:27:49 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.250 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.508 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.508 20:27:49 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:05.508 20:27:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:05.508 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.508 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.508 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.508 20:27:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:05.508 20:27:49 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:05.508 20:27:49 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:05.508 20:27:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:05.508 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.508 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.508 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.508 20:27:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:05.508 20:27:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:05.765 20:27:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:05.765 00:17:05.765 real 0m11.180s 00:17:05.765 user 0m0.652s 00:17:05.765 sys 0m1.524s 00:17:05.765 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.765 20:27:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.765 ************************************ 00:17:05.765 END TEST test_create_ublk 00:17:05.765 ************************************ 00:17:05.765 20:27:49 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:05.765 20:27:49 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:05.765 20:27:49 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.765 20:27:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.765 ************************************ 00:17:05.765 START TEST test_create_multi_ublk 00:17:05.765 ************************************ 00:17:05.765 20:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:05.765 20:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:05.765 20:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.765 20:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.765 [2024-12-12 20:27:49.789427] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:05.765 [2024-12-12 20:27:49.790989] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:05.765 20:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.765 20:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:05.765 20:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:05.766 20:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.766 20:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:05.766 20:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.766 20:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.024 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.024 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:06.024 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:06.024 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.024 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.024 [2024-12-12 20:27:50.013552] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:17:06.024 [2024-12-12 20:27:50.013864] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:06.024 [2024-12-12 20:27:50.013876] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:06.024 [2024-12-12 20:27:50.013885] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:06.024 [2024-12-12 20:27:50.025477] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:06.024 [2024-12-12 20:27:50.025500] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:06.024 [2024-12-12 20:27:50.037433] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:06.024 [2024-12-12 20:27:50.037998] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:06.024 [2024-12-12 20:27:50.072442] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:06.024 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.024 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:06.024 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.024 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:06.024 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.024 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.283 [2024-12-12 20:27:50.292535] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:06.283 [2024-12-12 20:27:50.292836] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:06.283 [2024-12-12 20:27:50.292851] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:06.283 [2024-12-12 20:27:50.292856] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:06.283 [2024-12-12 20:27:50.300452] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:06.283 [2024-12-12 20:27:50.300470] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:06.283 [2024-12-12 20:27:50.308437] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:06.283 [2024-12-12 20:27:50.308946] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:06.283 [2024-12-12 20:27:50.325445] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.283 
20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.283 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.283 [2024-12-12 20:27:50.492523] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:06.283 [2024-12-12 20:27:50.492834] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:06.283 [2024-12-12 20:27:50.492846] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:06.283 [2024-12-12 20:27:50.492853] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:06.283 [2024-12-12 20:27:50.500444] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:06.283 [2024-12-12 20:27:50.500465] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:06.283 [2024-12-12 20:27:50.508433] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:06.283 [2024-12-12 20:27:50.508947] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:06.587 [2024-12-12 20:27:50.517457] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.587 [2024-12-12 20:27:50.676542] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:06.587 [2024-12-12 20:27:50.676842] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:06.587 [2024-12-12 20:27:50.676854] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:06.587 [2024-12-12 20:27:50.676860] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:06.587 
[2024-12-12 20:27:50.684453] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:06.587 [2024-12-12 20:27:50.684471] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:06.587 [2024-12-12 20:27:50.692444] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:06.587 [2024-12-12 20:27:50.692956] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:06.587 [2024-12-12 20:27:50.701488] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.587 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:06.587 { 00:17:06.587 "ublk_device": "/dev/ublkb0", 00:17:06.587 "id": 0, 00:17:06.587 "queue_depth": 512, 00:17:06.587 "num_queues": 4, 00:17:06.587 "bdev_name": "Malloc0" 00:17:06.587 }, 00:17:06.587 { 00:17:06.587 "ublk_device": "/dev/ublkb1", 00:17:06.587 "id": 1, 00:17:06.588 "queue_depth": 512, 00:17:06.588 "num_queues": 4, 00:17:06.588 "bdev_name": "Malloc1" 00:17:06.588 }, 00:17:06.588 { 00:17:06.588 "ublk_device": "/dev/ublkb2", 00:17:06.588 "id": 2, 00:17:06.588 "queue_depth": 512, 00:17:06.588 "num_queues": 4, 00:17:06.588 "bdev_name": "Malloc2" 00:17:06.588 }, 00:17:06.588 { 00:17:06.588 "ublk_device": "/dev/ublkb3", 00:17:06.588 "id": 3, 00:17:06.588 "queue_depth": 512, 00:17:06.588 "num_queues": 4, 00:17:06.588 "bdev_name": "Malloc3" 00:17:06.588 } 00:17:06.588 ]' 00:17:06.588 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:06.588 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.588 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:06.588 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:06.588 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:06.588 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:06.588 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:06.858 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:06.858 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:06.858 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:06.858 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:06.858 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:06.858 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.858 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:06.858 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:17:06.858 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:06.858 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:06.858 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:06.858 20:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:06.858 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:06.858 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:06.858 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:06.858 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:06.858 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.858 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:07.116 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:07.373 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:07.373 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:07.373 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:07.373 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.374 [2024-12-12 20:27:51.396514] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:07.374 [2024-12-12 20:27:51.437917] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:07.374 [2024-12-12 20:27:51.438884] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:07.374 [2024-12-12 20:27:51.444436] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:07.374 [2024-12-12 20:27:51.444665] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:07.374 [2024-12-12 20:27:51.444679] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.374 [2024-12-12 20:27:51.460519] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:07.374 [2024-12-12 20:27:51.491789] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:07.374 [2024-12-12 20:27:51.492845] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:07.374 [2024-12-12 20:27:51.500438] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:07.374 [2024-12-12 20:27:51.500670] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:07.374 [2024-12-12 20:27:51.500683] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.374 [2024-12-12 20:27:51.516510] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:07.374 [2024-12-12 20:27:51.556471] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:07.374 [2024-12-12 20:27:51.557096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:07.374 [2024-12-12 20:27:51.565458] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:07.374 [2024-12-12 20:27:51.565687] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:07.374 [2024-12-12 20:27:51.565705] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.374 20:27:51 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:17:07.374 [2024-12-12 20:27:51.580506] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:07.631 [2024-12-12 20:27:51.612465] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:07.631 [2024-12-12 20:27:51.613060] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:07.631 [2024-12-12 20:27:51.621478] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:07.631 [2024-12-12 20:27:51.621695] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:07.631 [2024-12-12 20:27:51.621709] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:07.631 20:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.631 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:07.631 [2024-12-12 20:27:51.812499] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:07.631 [2024-12-12 20:27:51.816133] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:07.631 [2024-12-12 20:27:51.816165] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:07.631 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:07.632 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.632 20:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:07.632 20:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.632 20:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.196 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.197 20:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:08.197 20:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:08.197 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.197 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.454 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.454 20:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:08.454 20:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:08.454 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.454 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.711 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.711 20:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:08.711 20:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:08.711 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.711 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.969 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.969 20:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:08.969 20:27:52 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:08.969 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.969 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.969 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.969 20:27:52 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:08.969 20:27:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:08.969 20:27:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:08.969 20:27:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:08.969 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.969 20:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.969 20:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.969 20:27:53 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:08.969 20:27:53 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:08.969 20:27:53 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:08.969 00:17:08.969 real 0m3.256s 00:17:08.969 user 0m0.848s 00:17:08.969 sys 0m0.133s 00:17:08.969 20:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:08.969 ************************************ 00:17:08.969 END TEST test_create_multi_ublk 00:17:08.969 20:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.969 ************************************ 00:17:08.969 20:27:53 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:08.969 20:27:53 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:08.969 20:27:53 ublk -- ublk/ublk.sh@130 -- # killprocess 75470 00:17:08.969 20:27:53 ublk -- common/autotest_common.sh@954 -- # '[' -z 75470 ']' 00:17:08.969 20:27:53 ublk -- common/autotest_common.sh@958 -- # kill -0 75470 00:17:08.969 20:27:53 ublk -- common/autotest_common.sh@959 -- # uname 00:17:08.969 20:27:53 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.969 20:27:53 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75470 00:17:08.970 20:27:53 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:08.970 killing process with pid 75470 00:17:08.970 20:27:53 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:08.970 20:27:53 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75470' 00:17:08.970 20:27:53 ublk -- common/autotest_common.sh@973 -- # kill 75470 00:17:08.970 20:27:53 ublk -- common/autotest_common.sh@978 -- # wait 75470 00:17:09.535 [2024-12-12 20:27:53.634112] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:09.535 [2024-12-12 20:27:53.634164] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:10.107 ************************************ 00:17:10.107 END TEST ublk 00:17:10.107 ************************************ 00:17:10.107 00:17:10.107 real 0m23.855s 00:17:10.107 user 0m34.933s 00:17:10.107 sys 0m9.069s 00:17:10.107 20:27:54 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.107 20:27:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.107 20:27:54 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:10.107 
20:27:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:10.107 20:27:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.107 20:27:54 -- common/autotest_common.sh@10 -- # set +x 00:17:10.107 ************************************ 00:17:10.107 START TEST ublk_recovery 00:17:10.107 ************************************ 00:17:10.107 20:27:54 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:10.365 * Looking for test storage... 00:17:10.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:10.365 20:27:54 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:10.365 20:27:54 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:10.365 20:27:54 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:17:10.365 20:27:54 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.365 20:27:54 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:10.365 20:27:54 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.366 20:27:54 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:10.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.366 --rc genhtml_branch_coverage=1 00:17:10.366 --rc genhtml_function_coverage=1 00:17:10.366 --rc genhtml_legend=1 00:17:10.366 --rc geninfo_all_blocks=1 00:17:10.366 --rc geninfo_unexecuted_blocks=1 00:17:10.366 00:17:10.366 ' 00:17:10.366 20:27:54 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:10.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.366 --rc genhtml_branch_coverage=1 00:17:10.366 --rc genhtml_function_coverage=1 00:17:10.366 --rc genhtml_legend=1 00:17:10.366 --rc geninfo_all_blocks=1 00:17:10.366 --rc geninfo_unexecuted_blocks=1 00:17:10.366 00:17:10.366 ' 00:17:10.366 20:27:54 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:10.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.366 --rc genhtml_branch_coverage=1 00:17:10.366 --rc genhtml_function_coverage=1 00:17:10.366 --rc genhtml_legend=1 00:17:10.366 --rc geninfo_all_blocks=1 00:17:10.366 --rc geninfo_unexecuted_blocks=1 00:17:10.366 00:17:10.366 ' 00:17:10.366 20:27:54 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:10.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.366 --rc genhtml_branch_coverage=1 00:17:10.366 --rc genhtml_function_coverage=1 00:17:10.366 --rc genhtml_legend=1 00:17:10.366 --rc geninfo_all_blocks=1 00:17:10.366 --rc geninfo_unexecuted_blocks=1 00:17:10.366 00:17:10.366 ' 00:17:10.366 20:27:54 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:10.366 20:27:54 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:10.366 20:27:54 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:10.366 20:27:54 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:10.366 20:27:54 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:10.366 20:27:54 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:10.366 20:27:54 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:10.366 20:27:54 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:10.366 20:27:54 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:10.366 20:27:54 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:10.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.366 20:27:54 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75863 00:17:10.366 20:27:54 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.366 20:27:54 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75863 00:17:10.366 20:27:54 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:10.366 20:27:54 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75863 ']' 00:17:10.366 20:27:54 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.366 20:27:54 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.366 20:27:54 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.366 20:27:54 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.366 20:27:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.366 [2024-12-12 20:27:54.523955] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:17:10.366 [2024-12-12 20:27:54.524074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75863 ] 00:17:10.625 [2024-12-12 20:27:54.680959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:10.625 [2024-12-12 20:27:54.764290] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.625 [2024-12-12 20:27:54.764380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.190 20:27:55 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.190 20:27:55 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:11.190 20:27:55 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:11.190 20:27:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.190 20:27:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.190 [2024-12-12 20:27:55.355430] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:11.190 [2024-12-12 20:27:55.356946] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:11.190 20:27:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.190 20:27:55 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:11.190 20:27:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.190 20:27:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.448 malloc0 00:17:11.448 20:27:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.448 20:27:55 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:11.448 20:27:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.448 20:27:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.448 [2024-12-12 20:27:55.435744] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:11.448 [2024-12-12 20:27:55.435823] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:11.448 [2024-12-12 20:27:55.435832] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:11.448 [2024-12-12 20:27:55.435838] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:11.448 [2024-12-12 20:27:55.444514] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:11.448 [2024-12-12 20:27:55.444532] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:11.448 [2024-12-12 20:27:55.451434] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:11.448 [2024-12-12 20:27:55.451552] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:11.448 [2024-12-12 20:27:55.467436] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:11.448 1 00:17:11.448 20:27:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.448 20:27:55 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:12.383 20:27:56 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75894 00:17:12.383 20:27:56 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:12.383 20:27:56 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:12.383 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.383 fio-3.35 00:17:12.383 Starting 1 process 00:17:17.653 20:28:01 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75863 00:17:17.653 20:28:01 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:22.919 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75863 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:22.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.919 20:28:06 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76004 00:17:22.919 20:28:06 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:22.919 20:28:06 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76004 00:17:22.919 20:28:06 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:22.919 20:28:06 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76004 ']' 00:17:22.919 20:28:06 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.919 20:28:06 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.919 20:28:06 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.919 20:28:06 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.919 20:28:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.919 [2024-12-12 20:28:06.563372] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:17:22.919 [2024-12-12 20:28:06.563698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76004 ] 00:17:22.919 [2024-12-12 20:28:06.817956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:22.919 [2024-12-12 20:28:06.940282] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.919 [2024-12-12 20:28:06.940488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.485 20:28:07 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.485 20:28:07 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:23.485 20:28:07 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:23.485 20:28:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.485 20:28:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.485 [2024-12-12 20:28:07.447435] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:23.485 [2024-12-12 20:28:07.449027] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:23.485 20:28:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.485 20:28:07 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:23.485 20:28:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.485 20:28:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.485 malloc0 00:17:23.485 20:28:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.485 20:28:07 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:23.485 20:28:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.485 20:28:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.485 [2024-12-12 20:28:07.535541] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:23.485 [2024-12-12 20:28:07.535574] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:23.485 [2024-12-12 20:28:07.535582] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:23.485 [2024-12-12 20:28:07.543458] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:23.485 [2024-12-12 20:28:07.543479] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:17:23.485 [2024-12-12 20:28:07.543486] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:23.485 [2024-12-12 20:28:07.543550] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:23.485 1 00:17:23.485 20:28:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.485 20:28:07 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75894 00:17:23.485 [2024-12-12 20:28:07.551436] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:23.485 [2024-12-12 20:28:07.557742] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:23.485 [2024-12-12 20:28:07.565604] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:23.485 [2024-12-12 
20:28:07.565623] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:19.715 00:18:19.715 fio_test: (groupid=0, jobs=1): err= 0: pid=75902: Thu Dec 12 20:28:56 2024 00:18:19.715 read: IOPS=27.7k, BW=108MiB/s (113MB/s)(6493MiB/60001msec) 00:18:19.715 slat (nsec): min=941, max=211011, avg=4837.74, stdev=1658.99 00:18:19.715 clat (usec): min=588, max=6094.2k, avg=2247.08, stdev=35981.69 00:18:19.715 lat (usec): min=593, max=6094.2k, avg=2251.91, stdev=35981.68 00:18:19.715 clat percentiles (usec): 00:18:19.715 | 1.00th=[ 1663], 5.00th=[ 1795], 10.00th=[ 1827], 20.00th=[ 1860], 00:18:19.715 | 30.00th=[ 1876], 40.00th=[ 1893], 50.00th=[ 1909], 60.00th=[ 1926], 00:18:19.715 | 70.00th=[ 1958], 80.00th=[ 1975], 90.00th=[ 2057], 95.00th=[ 2933], 00:18:19.715 | 99.00th=[ 4948], 99.50th=[ 5407], 99.90th=[ 6915], 99.95th=[ 7963], 00:18:19.715 | 99.99th=[13173] 00:18:19.715 bw ( KiB/s): min=13640, max=129616, per=100.00%, avg=122046.76, stdev=15376.55, samples=108 00:18:19.715 iops : min= 3410, max=32404, avg=30511.68, stdev=3844.13, samples=108 00:18:19.715 write: IOPS=27.7k, BW=108MiB/s (113MB/s)(6488MiB/60001msec); 0 zone resets 00:18:19.715 slat (nsec): min=981, max=217632, avg=4863.70, stdev=1639.86 00:18:19.715 clat (usec): min=652, max=6094.3k, avg=2364.18, stdev=39544.10 00:18:19.715 lat (usec): min=657, max=6094.3k, avg=2369.05, stdev=39544.09 00:18:19.715 clat percentiles (usec): 00:18:19.715 | 1.00th=[ 1696], 5.00th=[ 1876], 10.00th=[ 1909], 20.00th=[ 1942], 00:18:19.715 | 30.00th=[ 1958], 40.00th=[ 1975], 50.00th=[ 1991], 60.00th=[ 2024], 00:18:19.715 | 70.00th=[ 2040], 80.00th=[ 2073], 90.00th=[ 2147], 95.00th=[ 2868], 00:18:19.715 | 99.00th=[ 4948], 99.50th=[ 5407], 99.90th=[ 6980], 99.95th=[ 7963], 00:18:19.715 | 99.99th=[13304] 00:18:19.715 bw ( KiB/s): min=13408, max=129464, per=100.00%, avg=121937.00, stdev=15505.02, samples=108 00:18:19.715 iops : min= 3352, max=32366, avg=30484.24, stdev=3876.25, samples=108 00:18:19.715 lat (usec) : 750=0.01%, 1000=0.01% 00:18:19.715 lat (msec) : 2=67.55%, 4=30.03%, 10=2.38%, 20=0.03%, >=2000=0.01% 00:18:19.715 cpu : usr=6.16%, sys=27.27%, ctx=113639, majf=0, minf=14 00:18:19.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:19.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:19.715 issued rwts: total=1662310,1661040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:19.715 00:18:19.715 Run status group 0 (all jobs): 00:18:19.715 READ: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=6493MiB (6809MB), run=60001-60001msec 00:18:19.715 WRITE: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=6488MiB (6804MB), run=60001-60001msec 00:18:19.715 00:18:19.715 Disk stats (read/write): 00:18:19.715 ublkb1: ios=1658976/1657576, merge=0/0, ticks=3644262/3706183, in_queue=7350445, util=99.89% 00:18:19.715 20:28:56 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.715 [2024-12-12 20:28:56.728814] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:19.715 [2024-12-12 20:28:56.784461] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:18:19.715 [2024-12-12 20:28:56.784682] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:19.715 [2024-12-12 20:28:56.792440] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:19.715 [2024-12-12 20:28:56.792596] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:19.715 [2024-12-12 20:28:56.792623] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.715 20:28:56 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.715 [2024-12-12 20:28:56.808505] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:19.715 [2024-12-12 20:28:56.812139] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:19.715 [2024-12-12 20:28:56.812172] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.715 20:28:56 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:19.715 20:28:56 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:19.715 20:28:56 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76004 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76004 ']' 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76004 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76004 00:18:19.715 killing process with pid 76004 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76004' 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76004 00:18:19.715 20:28:56 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76004 00:18:19.715 [2024-12-12 20:28:57.963360] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:19.715 [2024-12-12 20:28:57.963403] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:19.715 ************************************ 00:18:19.715 END TEST ublk_recovery 00:18:19.715 ************************************ 00:18:19.715 00:18:19.715 real 1m4.377s 00:18:19.715 user 1m45.783s 00:18:19.715 sys 0m32.202s 00:18:19.715 20:28:58 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.715 20:28:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.715 20:28:58 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:19.715 20:28:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:19.715 20:28:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:19.715 20:28:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.715 20:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:19.715 20:28:58 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:19.715 20:28:58 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:19.715 20:28:58 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:18:19.715 20:28:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:19.715 20:28:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:19.715 20:28:58 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:19.715 20:28:58 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:19.715 20:28:58 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:19.715 20:28:58 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:19.715 20:28:58 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:19.715 20:28:58 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:19.715 20:28:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:19.715 20:28:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.715 20:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:19.715 ************************************ 00:18:19.715 START TEST ftl 00:18:19.715 ************************************ 00:18:19.715 20:28:58 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:19.715 * Looking for test storage... 00:18:19.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.715 20:28:58 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:19.715 20:28:58 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:19.715 20:28:58 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:18:19.715 20:28:58 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:19.715 20:28:58 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.715 20:28:58 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.715 20:28:58 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.715 20:28:58 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.715 20:28:58 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.715 20:28:58 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.715 20:28:58 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.715 20:28:58 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.715 20:28:58 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.715 20:28:58 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.715 20:28:58 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.715 20:28:58 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:19.715 20:28:58 ftl -- scripts/common.sh@345 -- # : 1 00:18:19.715 20:28:58 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.715 20:28:58 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.715 20:28:58 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:19.715 20:28:58 ftl -- scripts/common.sh@353 -- # local d=1 00:18:19.715 20:28:58 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.715 20:28:58 ftl -- scripts/common.sh@355 -- # echo 1 00:18:19.715 20:28:58 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.715 20:28:58 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:19.715 20:28:58 ftl -- scripts/common.sh@353 -- # local d=2 00:18:19.715 20:28:58 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.715 20:28:58 ftl -- scripts/common.sh@355 -- # echo 2 00:18:19.716 20:28:58 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.716 20:28:58 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.716 20:28:58 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.716 20:28:58 ftl -- scripts/common.sh@368 -- # return 0 00:18:19.716 20:28:58 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.716 20:28:58 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:19.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.716 --rc genhtml_branch_coverage=1 00:18:19.716 --rc genhtml_function_coverage=1 00:18:19.716 --rc genhtml_legend=1 00:18:19.716 --rc geninfo_all_blocks=1 00:18:19.716 --rc geninfo_unexecuted_blocks=1 00:18:19.716 00:18:19.716 ' 00:18:19.716 20:28:58 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:19.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.716 --rc genhtml_branch_coverage=1 00:18:19.716 --rc genhtml_function_coverage=1 00:18:19.716 --rc genhtml_legend=1 00:18:19.716 --rc geninfo_all_blocks=1 00:18:19.716 --rc geninfo_unexecuted_blocks=1 00:18:19.716 00:18:19.716 ' 00:18:19.716 20:28:58 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:19.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.716 --rc genhtml_branch_coverage=1 00:18:19.716 --rc genhtml_function_coverage=1 00:18:19.716 --rc genhtml_legend=1 00:18:19.716 --rc geninfo_all_blocks=1 00:18:19.716 --rc geninfo_unexecuted_blocks=1 00:18:19.716 00:18:19.716 ' 00:18:19.716 20:28:58 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:19.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.716 --rc genhtml_branch_coverage=1 00:18:19.716 --rc genhtml_function_coverage=1 00:18:19.716 --rc genhtml_legend=1 00:18:19.716 --rc geninfo_all_blocks=1 00:18:19.716 --rc geninfo_unexecuted_blocks=1 00:18:19.716 00:18:19.716 ' 00:18:19.716 20:28:58 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:19.716 20:28:58 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:19.716 20:28:58 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.716 20:28:58 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.716 20:28:58 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:19.716 20:28:58 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:19.716 20:28:58 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.716 20:28:58 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:19.716 20:28:58 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:19.716 20:28:58 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.716 20:28:58 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.716 20:28:58 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:19.716 20:28:58 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:19.716 20:28:58 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:19.716 20:28:58 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:19.716 20:28:58 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:19.716 20:28:58 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:19.716 20:28:58 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.716 20:28:58 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.716 20:28:58 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:19.716 20:28:58 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:19.716 20:28:58 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:19.716 20:28:58 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:19.716 20:28:58 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:19.716 20:28:58 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:19.716 20:28:58 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:19.716 20:28:58 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:19.716 20:28:58 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:19.716 20:28:58 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:19.716 20:28:58 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.716 20:28:58 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:19.716 20:28:58 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:19.716 20:28:58 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:19.716 20:28:58 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:19.716 20:28:58 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:19.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:19.716 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:19.716 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:19.716 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:19.716 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:19.716 20:28:59 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76809 00:18:19.716 20:28:59 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:19.716 20:28:59 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76809 00:18:19.716 20:28:59 ftl -- common/autotest_common.sh@835 -- # '[' -z 76809 ']' 00:18:19.716 20:28:59 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.716 20:28:59 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.716 20:28:59 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.716 20:28:59 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.716 20:28:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:19.716 [2024-12-12 20:28:59.436192] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:18:19.716 [2024-12-12 20:28:59.436479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76809 ] 00:18:19.716 [2024-12-12 20:28:59.592962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.716 [2024-12-12 20:28:59.674717] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.716 20:29:00 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.716 20:29:00 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:19.716 20:29:00 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:19.716 20:29:00 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:19.716 20:29:01 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:19.716 20:29:01 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:19.716 20:29:01 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:19.716 20:29:01 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:19.716 20:29:01 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:19.716 20:29:01 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:19.716 20:29:01 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:19.716 20:29:01 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:19.716 20:29:01 ftl -- ftl/ftl.sh@50 -- # break 00:18:19.716 20:29:01 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:19.716 20:29:01 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:19.716 20:29:01 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:19.716 20:29:01 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:19.716 20:29:02 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:19.716 20:29:02 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:19.716 20:29:02 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:19.716 20:29:02 ftl -- ftl/ftl.sh@63 -- # break 00:18:19.716 20:29:02 ftl -- ftl/ftl.sh@66 -- # killprocess 76809 00:18:19.716 20:29:02 ftl -- common/autotest_common.sh@954 -- # '[' -z 76809 ']' 00:18:19.716 20:29:02 ftl -- common/autotest_common.sh@958 -- # kill -0 76809 00:18:19.716 20:29:02 ftl -- common/autotest_common.sh@959 -- # uname 00:18:19.716 20:29:02 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.716 20:29:02 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76809 00:18:19.716 killing process with pid 76809 00:18:19.716 20:29:02 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:19.716 20:29:02 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:19.716 20:29:02 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76809' 00:18:19.716 20:29:02 ftl -- common/autotest_common.sh@973 -- # kill 76809 00:18:19.716 20:29:02 ftl -- common/autotest_common.sh@978 -- # wait 76809 00:18:19.716 20:29:03 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:19.716 20:29:03 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:19.716 20:29:03 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:19.716 20:29:03 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.716 20:29:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:19.716 ************************************ 00:18:19.716 START TEST ftl_fio_basic 00:18:19.716 ************************************ 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:19.716 * Looking for test storage... 00:18:19.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.716 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:19.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.717 --rc genhtml_branch_coverage=1 00:18:19.717 --rc genhtml_function_coverage=1 00:18:19.717 --rc genhtml_legend=1 00:18:19.717 --rc geninfo_all_blocks=1 00:18:19.717 --rc geninfo_unexecuted_blocks=1 00:18:19.717 00:18:19.717 ' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:19.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.717 --rc genhtml_branch_coverage=1 00:18:19.717 --rc genhtml_function_coverage=1 00:18:19.717 --rc genhtml_legend=1 00:18:19.717 --rc geninfo_all_blocks=1 00:18:19.717 --rc geninfo_unexecuted_blocks=1 00:18:19.717 00:18:19.717 ' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:19.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.717 --rc genhtml_branch_coverage=1 00:18:19.717 --rc genhtml_function_coverage=1 00:18:19.717 --rc genhtml_legend=1 00:18:19.717 --rc geninfo_all_blocks=1 00:18:19.717 --rc geninfo_unexecuted_blocks=1 00:18:19.717 00:18:19.717 ' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:19.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.717 --rc genhtml_branch_coverage=1 00:18:19.717 --rc genhtml_function_coverage=1 00:18:19.717 --rc genhtml_legend=1 00:18:19.717 --rc geninfo_all_blocks=1 00:18:19.717 --rc geninfo_unexecuted_blocks=1 00:18:19.717 00:18:19.717 ' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
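The `lt 1.15 2` probe traced above is scripts/common.sh comparing the detected lcov version field by field: each version string is split on `.`, `-` and `:` (the IFS=.-: reads), the shorter side is padded with zeros, and the first unequal field decides. A minimal bash sketch of that logic, assuming numeric fields only; the helper name is illustrative, not the repo's exact code:

    # True (exit 0) when version $1 sorts strictly before version $2.
    version_lt() {
        local IFS=.-:                  # split fields the way scripts/common.sh does
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                       # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is pre-2.0"

Here 1 < 2 settles the comparison in the first field, so lt is true and the pre-2.0 `--rc lcov_branch_coverage=1` option spelling is what gets exported in LCOV_OPTS above.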
00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76941 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76941 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76941 ']' 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.717 20:29:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:19.717 [2024-12-12 20:29:03.616685] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
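`waitforlisten 76941` above blocks until the freshly started spdk_tgt answers RPC on /var/tmp/spdk.sock; the trace shows its inputs (`rpc_addr=/var/tmp/spdk.sock`, `max_retries=100`). A minimal sketch of that pattern under the same parameters; the helper name and the 0.5-second interval are illustrative, not autotest_common.sh verbatim:

    # Poll until process $1 is alive and its RPC socket $2 answers, or give up.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 1     # target died while waiting
            # rpc_get_methods is cheap and available as soon as the socket is up
            if "$rootdir/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }

The -m 7 mask on the spdk_tgt command line is what the EAL parameters line echoes back as `-c 7`, and is why three reactors come up (cores 0-2) before the `return 0` is traced.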
00:18:19.717 [2024-12-12 20:29:03.616933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76941 ] 00:18:19.717 [2024-12-12 20:29:03.770739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:19.717 [2024-12-12 20:29:03.850283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.717 [2024-12-12 20:29:03.850468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.717 [2024-12-12 20:29:03.850488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.285 20:29:04 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.285 20:29:04 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:20.285 20:29:04 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:20.285 20:29:04 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:20.285 20:29:04 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:20.285 20:29:04 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:20.285 20:29:04 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:20.285 20:29:04 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:20.542 20:29:04 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:20.542 20:29:04 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:20.542 20:29:04 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:20.542 20:29:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:20.542 20:29:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:20.542 20:29:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:20.542 20:29:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:20.542 20:29:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:20.800 20:29:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:20.800 { 00:18:20.800 "name": "nvme0n1", 00:18:20.800 "aliases": [ 00:18:20.800 "8f217352-dfa7-42f0-97b6-6bcd59e863f2" 00:18:20.800 ], 00:18:20.800 "product_name": "NVMe disk", 00:18:20.800 "block_size": 4096, 00:18:20.800 "num_blocks": 1310720, 00:18:20.800 "uuid": "8f217352-dfa7-42f0-97b6-6bcd59e863f2", 00:18:20.800 "numa_id": -1, 00:18:20.800 "assigned_rate_limits": { 00:18:20.800 "rw_ios_per_sec": 0, 00:18:20.800 "rw_mbytes_per_sec": 0, 00:18:20.800 "r_mbytes_per_sec": 0, 00:18:20.800 "w_mbytes_per_sec": 0 00:18:20.800 }, 00:18:20.800 "claimed": false, 00:18:20.800 "zoned": false, 00:18:20.800 "supported_io_types": { 00:18:20.800 "read": true, 00:18:20.800 "write": true, 00:18:20.800 "unmap": true, 00:18:20.800 "flush": true, 00:18:20.800 "reset": true, 00:18:20.800 "nvme_admin": true, 00:18:20.800 "nvme_io": true, 00:18:20.800 "nvme_io_md": false, 00:18:20.800 "write_zeroes": true, 00:18:20.800 "zcopy": false, 00:18:20.800 "get_zone_info": false, 00:18:20.800 "zone_management": false, 00:18:20.800 "zone_append": false, 00:18:20.800 "compare": true, 00:18:20.800 "compare_and_write": false, 00:18:20.800 "abort": true, 00:18:20.800 
"seek_hole": false, 00:18:20.800 "seek_data": false, 00:18:20.800 "copy": true, 00:18:20.800 "nvme_iov_md": false 00:18:20.800 }, 00:18:20.800 "driver_specific": { 00:18:20.800 "nvme": [ 00:18:20.800 { 00:18:20.800 "pci_address": "0000:00:11.0", 00:18:20.800 "trid": { 00:18:20.800 "trtype": "PCIe", 00:18:20.800 "traddr": "0000:00:11.0" 00:18:20.800 }, 00:18:20.800 "ctrlr_data": { 00:18:20.800 "cntlid": 0, 00:18:20.800 "vendor_id": "0x1b36", 00:18:20.800 "model_number": "QEMU NVMe Ctrl", 00:18:20.800 "serial_number": "12341", 00:18:20.800 "firmware_revision": "8.0.0", 00:18:20.800 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:20.800 "oacs": { 00:18:20.800 "security": 0, 00:18:20.800 "format": 1, 00:18:20.800 "firmware": 0, 00:18:20.800 "ns_manage": 1 00:18:20.800 }, 00:18:20.800 "multi_ctrlr": false, 00:18:20.800 "ana_reporting": false 00:18:20.800 }, 00:18:20.800 "vs": { 00:18:20.800 "nvme_version": "1.4" 00:18:20.800 }, 00:18:20.800 "ns_data": { 00:18:20.800 "id": 1, 00:18:20.800 "can_share": false 00:18:20.800 } 00:18:20.800 } 00:18:20.800 ], 00:18:20.800 "mp_policy": "active_passive" 00:18:20.800 } 00:18:20.800 } 00:18:20.800 ]' 00:18:20.800 20:29:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:20.800 20:29:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:20.800 20:29:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:20.800 20:29:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:20.800 20:29:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:20.800 20:29:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:20.800 20:29:04 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:20.800 20:29:04 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:20.800 20:29:04 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:20.800 20:29:04 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:20.800 20:29:04 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:21.058 20:29:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:21.058 20:29:05 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:21.319 20:29:05 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=960bfb9a-f546-48a9-a911-7a7e8dd2ac4d 00:18:21.319 20:29:05 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 960bfb9a-f546-48a9-a911-7a7e8dd2ac4d 00:18:21.581 20:29:05 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=c86826f9-e281-4e71-acce-f40f26ed88fe 00:18:21.581 20:29:05 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c86826f9-e281-4e71-acce-f40f26ed88fe 00:18:21.581 20:29:05 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:21.581 20:29:05 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:21.581 20:29:05 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=c86826f9-e281-4e71-acce-f40f26ed88fe 00:18:21.581 20:29:05 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:21.581 20:29:05 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size c86826f9-e281-4e71-acce-f40f26ed88fe 00:18:21.581 20:29:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=c86826f9-e281-4e71-acce-f40f26ed88fe 
00:18:21.581 20:29:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:21.581 20:29:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:21.581 20:29:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:21.581 20:29:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c86826f9-e281-4e71-acce-f40f26ed88fe 00:18:21.840 20:29:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:21.840 { 00:18:21.840 "name": "c86826f9-e281-4e71-acce-f40f26ed88fe", 00:18:21.840 "aliases": [ 00:18:21.840 "lvs/nvme0n1p0" 00:18:21.840 ], 00:18:21.840 "product_name": "Logical Volume", 00:18:21.840 "block_size": 4096, 00:18:21.840 "num_blocks": 26476544, 00:18:21.840 "uuid": "c86826f9-e281-4e71-acce-f40f26ed88fe", 00:18:21.840 "assigned_rate_limits": { 00:18:21.840 "rw_ios_per_sec": 0, 00:18:21.840 "rw_mbytes_per_sec": 0, 00:18:21.840 "r_mbytes_per_sec": 0, 00:18:21.840 "w_mbytes_per_sec": 0 00:18:21.840 }, 00:18:21.840 "claimed": false, 00:18:21.840 "zoned": false, 00:18:21.840 "supported_io_types": { 00:18:21.840 "read": true, 00:18:21.840 "write": true, 00:18:21.840 "unmap": true, 00:18:21.840 "flush": false, 00:18:21.840 "reset": true, 00:18:21.840 "nvme_admin": false, 00:18:21.840 "nvme_io": false, 00:18:21.840 "nvme_io_md": false, 00:18:21.840 "write_zeroes": true, 00:18:21.840 "zcopy": false, 00:18:21.840 "get_zone_info": false, 00:18:21.840 "zone_management": false, 00:18:21.840 "zone_append": false, 00:18:21.840 "compare": false, 00:18:21.840 "compare_and_write": false, 00:18:21.840 "abort": false, 00:18:21.840 "seek_hole": true, 00:18:21.840 "seek_data": true, 00:18:21.840 "copy": false, 00:18:21.840 "nvme_iov_md": false 00:18:21.840 }, 00:18:21.840 "driver_specific": { 00:18:21.840 "lvol": { 00:18:21.840 "lvol_store_uuid": "960bfb9a-f546-48a9-a911-7a7e8dd2ac4d", 00:18:21.840 "base_bdev": "nvme0n1", 00:18:21.840 "thin_provision": true, 00:18:21.840 "num_allocated_clusters": 0, 00:18:21.840 "snapshot": false, 00:18:21.840 "clone": false, 00:18:21.840 "esnap_clone": false 00:18:21.840 } 00:18:21.840 } 00:18:21.840 } 00:18:21.840 ]' 00:18:21.840 20:29:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:21.840 20:29:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:21.840 20:29:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:21.840 20:29:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:21.840 20:29:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:21.840 20:29:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:21.840 20:29:05 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:21.840 20:29:05 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:21.840 20:29:05 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:22.099 20:29:06 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:22.099 20:29:06 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:22.099 20:29:06 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size c86826f9-e281-4e71-acce-f40f26ed88fe 00:18:22.099 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=c86826f9-e281-4e71-acce-f40f26ed88fe 00:18:22.099 20:29:06 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:22.099 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:22.099 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:22.099 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c86826f9-e281-4e71-acce-f40f26ed88fe 00:18:22.359 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:22.359 { 00:18:22.359 "name": "c86826f9-e281-4e71-acce-f40f26ed88fe", 00:18:22.359 "aliases": [ 00:18:22.359 "lvs/nvme0n1p0" 00:18:22.359 ], 00:18:22.359 "product_name": "Logical Volume", 00:18:22.359 "block_size": 4096, 00:18:22.359 "num_blocks": 26476544, 00:18:22.359 "uuid": "c86826f9-e281-4e71-acce-f40f26ed88fe", 00:18:22.359 "assigned_rate_limits": { 00:18:22.359 "rw_ios_per_sec": 0, 00:18:22.359 "rw_mbytes_per_sec": 0, 00:18:22.359 "r_mbytes_per_sec": 0, 00:18:22.359 "w_mbytes_per_sec": 0 00:18:22.359 }, 00:18:22.359 "claimed": false, 00:18:22.359 "zoned": false, 00:18:22.359 "supported_io_types": { 00:18:22.359 "read": true, 00:18:22.359 "write": true, 00:18:22.359 "unmap": true, 00:18:22.359 "flush": false, 00:18:22.359 "reset": true, 00:18:22.359 "nvme_admin": false, 00:18:22.359 "nvme_io": false, 00:18:22.359 "nvme_io_md": false, 00:18:22.359 "write_zeroes": true, 00:18:22.359 "zcopy": false, 00:18:22.359 "get_zone_info": false, 00:18:22.359 "zone_management": false, 00:18:22.359 "zone_append": false, 00:18:22.359 "compare": false, 00:18:22.359 "compare_and_write": false, 00:18:22.359 "abort": false, 00:18:22.359 "seek_hole": true, 00:18:22.359 "seek_data": true, 00:18:22.359 "copy": false, 00:18:22.359 "nvme_iov_md": false 00:18:22.359 }, 00:18:22.359 "driver_specific": { 00:18:22.359 "lvol": { 00:18:22.359 "lvol_store_uuid": "960bfb9a-f546-48a9-a911-7a7e8dd2ac4d", 00:18:22.359 "base_bdev": "nvme0n1", 00:18:22.359 "thin_provision": true, 00:18:22.359 "num_allocated_clusters": 0, 00:18:22.359 "snapshot": false, 00:18:22.359 "clone": false, 00:18:22.359 "esnap_clone": false 00:18:22.359 } 00:18:22.359 } 00:18:22.359 } 00:18:22.359 ]' 00:18:22.359 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:22.359 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:22.359 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:22.359 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:22.359 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:22.359 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:22.359 20:29:06 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:22.359 20:29:06 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:22.620 20:29:06 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:22.620 20:29:06 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:22.620 20:29:06 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:22.620 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:22.620 20:29:06 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size c86826f9-e281-4e71-acce-f40f26ed88fe 00:18:22.620 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=c86826f9-e281-4e71-acce-f40f26ed88fe 00:18:22.620 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:22.620 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:22.620 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:22.620 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c86826f9-e281-4e71-acce-f40f26ed88fe 00:18:22.881 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:22.881 { 00:18:22.881 "name": "c86826f9-e281-4e71-acce-f40f26ed88fe", 00:18:22.881 "aliases": [ 00:18:22.881 "lvs/nvme0n1p0" 00:18:22.881 ], 00:18:22.881 "product_name": "Logical Volume", 00:18:22.881 "block_size": 4096, 00:18:22.881 "num_blocks": 26476544, 00:18:22.881 "uuid": "c86826f9-e281-4e71-acce-f40f26ed88fe", 00:18:22.881 "assigned_rate_limits": { 00:18:22.881 "rw_ios_per_sec": 0, 00:18:22.881 "rw_mbytes_per_sec": 0, 00:18:22.881 "r_mbytes_per_sec": 0, 00:18:22.881 "w_mbytes_per_sec": 0 00:18:22.881 }, 00:18:22.881 "claimed": false, 00:18:22.881 "zoned": false, 00:18:22.881 "supported_io_types": { 00:18:22.881 "read": true, 00:18:22.881 "write": true, 00:18:22.881 "unmap": true, 00:18:22.881 "flush": false, 00:18:22.881 "reset": true, 00:18:22.881 "nvme_admin": false, 00:18:22.881 "nvme_io": false, 00:18:22.881 "nvme_io_md": false, 00:18:22.881 "write_zeroes": true, 00:18:22.881 "zcopy": false, 00:18:22.881 "get_zone_info": false, 00:18:22.881 "zone_management": false, 00:18:22.881 "zone_append": false, 00:18:22.881 "compare": false, 00:18:22.881 "compare_and_write": false, 00:18:22.881 "abort": false, 00:18:22.881 "seek_hole": true, 00:18:22.881 "seek_data": true, 00:18:22.881 "copy": false, 00:18:22.881 "nvme_iov_md": false 00:18:22.881 }, 00:18:22.881 "driver_specific": { 00:18:22.881 "lvol": { 00:18:22.881 "lvol_store_uuid": "960bfb9a-f546-48a9-a911-7a7e8dd2ac4d", 00:18:22.881 "base_bdev": "nvme0n1", 00:18:22.881 "thin_provision": true, 00:18:22.881 "num_allocated_clusters": 0, 00:18:22.881 "snapshot": false, 00:18:22.881 "clone": false, 00:18:22.881 "esnap_clone": false 00:18:22.881 } 00:18:22.881 } 00:18:22.881 } 00:18:22.881 ]' 00:18:22.881 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:22.881 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:22.882 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:22.882 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:22.882 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:22.882 20:29:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:22.882 20:29:06 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:22.882 20:29:06 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:22.882 20:29:06 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c86826f9-e281-4e71-acce-f40f26ed88fe -c nvc0n1p0 --l2p_dram_limit 60 00:18:23.141 [2024-12-12 20:29:07.128930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.141 [2024-12-12 20:29:07.128971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:23.141 [2024-12-12 20:29:07.128985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:23.141 
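Two notes on the block above. First, the `fio.sh: line 52: [: -eq: unary operator expected` message is an unset variable expanding to nothing inside a bare `[ ... -eq 1 ]`, leaving `[` with `-eq` as its first operand; the test merely evaluates false and execution continues, which is why the run proceeds straight into the next get_bdev_size. Quoting with a default silences it:

    [ "${x:-0}" -eq 1 ]      # instead of: [ $x -eq 1 ] with x possibly unset

Second, condensed from the RPC calls traced since this test started, the sequence now producing ftl0 is roughly the sketch below. Addresses, sizes and flags are copied from this log; $LVS_UUID and $LVOL_UUID stand for the UUIDs printed earlier, and the real test scripts wrap every step in checks this sketch omits:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe -> nvme0n1
    $rpc bdev_nvme_attach_controller -b nvc0  -t PCIe -a 0000:00:10.0   # cache NVMe -> nvc0n1
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # prints $LVS_UUID
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$LVS_UUID"            # thin base, prints $LVOL_UUID
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # -> nvc0n1p0 (5171 MiB)
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$LVOL_UUID" -c nvc0n1p0 --l2p_dram_limit 60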
[2024-12-12 20:29:07.128992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.141 [2024-12-12 20:29:07.129043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.141 [2024-12-12 20:29:07.129054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:23.141 [2024-12-12 20:29:07.129063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:23.141 [2024-12-12 20:29:07.129069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.141 [2024-12-12 20:29:07.129098] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:23.141 [2024-12-12 20:29:07.130019] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:23.141 [2024-12-12 20:29:07.130060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.141 [2024-12-12 20:29:07.130069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:23.141 [2024-12-12 20:29:07.130079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:18:23.141 [2024-12-12 20:29:07.130085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.141 [2024-12-12 20:29:07.130201] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 74c47d0a-564e-4b87-bb36-8c50c8e3f62a 00:18:23.141 [2024-12-12 20:29:07.131222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.141 [2024-12-12 20:29:07.131241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:23.141 [2024-12-12 20:29:07.131249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:23.141 [2024-12-12 20:29:07.131279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.141 [2024-12-12 20:29:07.136524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.141 [2024-12-12 20:29:07.136551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:23.141 [2024-12-12 20:29:07.136559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.191 ms 00:18:23.141 [2024-12-12 20:29:07.136567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.141 [2024-12-12 20:29:07.136643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.141 [2024-12-12 20:29:07.136655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:23.141 [2024-12-12 20:29:07.136661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:23.141 [2024-12-12 20:29:07.136671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.141 [2024-12-12 20:29:07.136712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.141 [2024-12-12 20:29:07.136721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:23.141 [2024-12-12 20:29:07.136727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:23.141 [2024-12-12 20:29:07.136734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.141 [2024-12-12 20:29:07.136754] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:23.141 [2024-12-12 20:29:07.139755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.141 [2024-12-12 
20:29:07.139776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:23.141 [2024-12-12 20:29:07.139787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:18:23.141 [2024-12-12 20:29:07.139794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.141 [2024-12-12 20:29:07.139824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.141 [2024-12-12 20:29:07.139831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:23.141 [2024-12-12 20:29:07.139839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:23.141 [2024-12-12 20:29:07.139844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.141 [2024-12-12 20:29:07.139862] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:23.142 [2024-12-12 20:29:07.139984] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:23.142 [2024-12-12 20:29:07.139999] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:23.142 [2024-12-12 20:29:07.140008] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:23.142 [2024-12-12 20:29:07.140018] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:23.142 [2024-12-12 20:29:07.140025] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:23.142 [2024-12-12 20:29:07.140034] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:23.142 [2024-12-12 20:29:07.140040] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:23.142 [2024-12-12 20:29:07.140046] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:23.142 [2024-12-12 20:29:07.140052] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:23.142 [2024-12-12 20:29:07.140060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.142 [2024-12-12 20:29:07.140067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:23.142 [2024-12-12 20:29:07.140074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:18:23.142 [2024-12-12 20:29:07.140080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.142 [2024-12-12 20:29:07.140150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.142 [2024-12-12 20:29:07.140156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:23.142 [2024-12-12 20:29:07.140163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:23.142 [2024-12-12 20:29:07.140169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.142 [2024-12-12 20:29:07.140253] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:23.142 [2024-12-12 20:29:07.140260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:23.142 [2024-12-12 20:29:07.140269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:23.142 [2024-12-12 20:29:07.140275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.142 [2024-12-12 20:29:07.140283] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:18:23.142 [2024-12-12 20:29:07.140288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:23.142 [2024-12-12 20:29:07.140295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:23.142 [2024-12-12 20:29:07.140300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:23.142 [2024-12-12 20:29:07.140308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:23.142 [2024-12-12 20:29:07.140313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:23.142 [2024-12-12 20:29:07.140320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:23.142 [2024-12-12 20:29:07.140326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:23.142 [2024-12-12 20:29:07.140332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:23.142 [2024-12-12 20:29:07.140337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:23.142 [2024-12-12 20:29:07.140343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:23.142 [2024-12-12 20:29:07.140348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.142 [2024-12-12 20:29:07.140357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:23.142 [2024-12-12 20:29:07.140362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:23.142 [2024-12-12 20:29:07.140368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.142 [2024-12-12 20:29:07.140374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:23.142 [2024-12-12 20:29:07.140383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:23.142 [2024-12-12 20:29:07.140389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.142 [2024-12-12 20:29:07.140395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:23.142 [2024-12-12 20:29:07.140400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:23.142 [2024-12-12 20:29:07.140406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.142 [2024-12-12 20:29:07.140423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:23.142 [2024-12-12 20:29:07.140431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:23.142 [2024-12-12 20:29:07.140436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.142 [2024-12-12 20:29:07.140443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:23.142 [2024-12-12 20:29:07.140449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:23.142 [2024-12-12 20:29:07.140455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.142 [2024-12-12 20:29:07.140460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:23.142 [2024-12-12 20:29:07.140468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:23.142 [2024-12-12 20:29:07.140483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:23.142 [2024-12-12 20:29:07.140489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:23.142 [2024-12-12 20:29:07.140494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:23.142 [2024-12-12 20:29:07.140501] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:23.142 [2024-12-12 20:29:07.140506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:23.142 [2024-12-12 20:29:07.140512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:23.142 [2024-12-12 20:29:07.140518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.142 [2024-12-12 20:29:07.140524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:23.142 [2024-12-12 20:29:07.140529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:23.142 [2024-12-12 20:29:07.140536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.142 [2024-12-12 20:29:07.140541] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:23.142 [2024-12-12 20:29:07.140549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:23.142 [2024-12-12 20:29:07.140554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:23.142 [2024-12-12 20:29:07.140561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.142 [2024-12-12 20:29:07.140568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:23.142 [2024-12-12 20:29:07.140576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:23.142 [2024-12-12 20:29:07.140581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:23.142 [2024-12-12 20:29:07.140588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:23.142 [2024-12-12 20:29:07.140593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:23.142 [2024-12-12 20:29:07.140601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:23.142 [2024-12-12 20:29:07.140607] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:23.142 [2024-12-12 20:29:07.140616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:23.142 [2024-12-12 20:29:07.140623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:23.142 [2024-12-12 20:29:07.140630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:23.142 [2024-12-12 20:29:07.140636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:23.142 [2024-12-12 20:29:07.140642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:23.142 [2024-12-12 20:29:07.140648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:23.142 [2024-12-12 20:29:07.140656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:23.142 [2024-12-12 20:29:07.140662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:23.142 [2024-12-12 20:29:07.140669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:18:23.142 [2024-12-12 20:29:07.140674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:23.142 [2024-12-12 20:29:07.140682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:23.142 [2024-12-12 20:29:07.140688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:23.142 [2024-12-12 20:29:07.140695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:23.142 [2024-12-12 20:29:07.140700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:23.142 [2024-12-12 20:29:07.140707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:23.142 [2024-12-12 20:29:07.140713] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:23.142 [2024-12-12 20:29:07.140720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:23.142 [2024-12-12 20:29:07.140727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:23.142 [2024-12-12 20:29:07.140734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:23.142 [2024-12-12 20:29:07.140740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:23.142 [2024-12-12 20:29:07.140748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:23.142 [2024-12-12 20:29:07.140754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.142 [2024-12-12 20:29:07.140761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:23.142 [2024-12-12 20:29:07.140767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:18:23.142 [2024-12-12 20:29:07.140773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.142 [2024-12-12 20:29:07.140824] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
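The layout dump above fixes the L2P geometry: 20971520 entries at 4 bytes each is exactly the 80.00-MiB l2p region, and 20971520 addressable 4-KiB blocks is the 80 GiB that ftl0 will later expose as num_blocks 20971520. `--l2p_dram_limit 60` caps how much of that table stays resident, which is presumably why startup soon reports "l2p maximum resident size is: 59 (of 60) MiB" (the limit minus the cache's own overhead). The arithmetic, as shell:

    entries=20971520    # "L2P entries" from the layout dump
    addr=4              # "L2P address size", bytes per entry
    blk=4096            # FTL block size
    echo $(( entries * addr / 1024 / 1024 ))           # 80  MiB of mapping table
    echo $(( entries * blk / 1024 / 1024 / 1024 ))     # 80  GiB of mapped user space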
00:18:23.142 [2024-12-12 20:29:07.140837] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:25.685 [2024-12-12 20:29:09.654791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-12-12 20:29:09.654850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:25.685 [2024-12-12 20:29:09.654864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2513.955 ms 00:18:25.685 [2024-12-12 20:29:09.654874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-12-12 20:29:09.679802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-12-12 20:29:09.679843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:25.685 [2024-12-12 20:29:09.679856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.733 ms 00:18:25.685 [2024-12-12 20:29:09.679865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-12-12 20:29:09.679987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-12-12 20:29:09.680004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:25.685 [2024-12-12 20:29:09.680013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:25.685 [2024-12-12 20:29:09.680023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-12-12 20:29:09.721208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-12-12 20:29:09.721476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:25.685 [2024-12-12 20:29:09.721567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.146 ms 00:18:25.685 [2024-12-12 20:29:09.721615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-12-12 20:29:09.721684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-12-12 20:29:09.721727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:25.685 [2024-12-12 20:29:09.721768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:25.685 [2024-12-12 20:29:09.721885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-12-12 20:29:09.722288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-12-12 20:29:09.722448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:25.685 [2024-12-12 20:29:09.722531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:18:25.685 [2024-12-12 20:29:09.722664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-12-12 20:29:09.722874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-12-12 20:29:09.722944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:25.685 [2024-12-12 20:29:09.723061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:18:25.685 [2024-12-12 20:29:09.723077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-12-12 20:29:09.737166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-12-12 20:29:09.737326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:25.685 [2024-12-12 
20:29:09.737382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.064 ms 00:18:25.685 [2024-12-12 20:29:09.737453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-12-12 20:29:09.748606] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:25.685 [2024-12-12 20:29:09.762335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-12-12 20:29:09.762434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:25.685 [2024-12-12 20:29:09.762486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.773 ms 00:18:25.685 [2024-12-12 20:29:09.762536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-12-12 20:29:09.809263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-12-12 20:29:09.809504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:25.685 [2024-12-12 20:29:09.809584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.664 ms 00:18:25.685 [2024-12-12 20:29:09.809629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-12-12 20:29:09.809843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-12-12 20:29:09.809957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:25.685 [2024-12-12 20:29:09.810034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:18:25.685 [2024-12-12 20:29:09.810091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-12-12 20:29:09.833151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-12-12 20:29:09.833325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:25.685 [2024-12-12 20:29:09.833381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.929 ms 00:18:25.686 [2024-12-12 20:29:09.833437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.686 [2024-12-12 20:29:09.855455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.686 [2024-12-12 20:29:09.855607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:25.686 [2024-12-12 20:29:09.855719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.942 ms 00:18:25.686 [2024-12-12 20:29:09.855809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.686 [2024-12-12 20:29:09.856430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.686 [2024-12-12 20:29:09.856559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:25.686 [2024-12-12 20:29:09.856662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:18:25.686 [2024-12-12 20:29:09.856718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.946 [2024-12-12 20:29:09.917614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.946 [2024-12-12 20:29:09.917808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:25.946 [2024-12-12 20:29:09.917918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.803 ms 00:18:25.946 [2024-12-12 20:29:09.918059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.946 [2024-12-12 
20:29:09.941901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.946 [2024-12-12 20:29:09.942059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:25.946 [2024-12-12 20:29:09.942156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.659 ms 00:18:25.946 [2024-12-12 20:29:09.942213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.946 [2024-12-12 20:29:09.964731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.946 [2024-12-12 20:29:09.964885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:25.946 [2024-12-12 20:29:09.964982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.419 ms 00:18:25.946 [2024-12-12 20:29:09.965068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.946 [2024-12-12 20:29:09.988103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.946 [2024-12-12 20:29:09.988255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:25.946 [2024-12-12 20:29:09.988353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.952 ms 00:18:25.947 [2024-12-12 20:29:09.988424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.947 [2024-12-12 20:29:09.988556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.947 [2024-12-12 20:29:09.988662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:25.947 [2024-12-12 20:29:09.988762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:25.947 [2024-12-12 20:29:09.988854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.947 [2024-12-12 20:29:09.989015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.947 [2024-12-12 20:29:09.989131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:25.947 [2024-12-12 20:29:09.989227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:25.947 [2024-12-12 20:29:09.989335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.947 [2024-12-12 20:29:09.990225] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2860.885 ms, result 0 00:18:25.947 { 00:18:25.947 "name": "ftl0", 00:18:25.947 "uuid": "74c47d0a-564e-4b87-bb36-8c50c8e3f62a" 00:18:25.947 } 00:18:25.947 20:29:10 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:25.947 20:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:25.947 20:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:25.947 20:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:25.947 20:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:25.947 20:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:25.947 20:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:26.207 20:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:26.207 [ 00:18:26.207 { 00:18:26.207 "name": "ftl0", 00:18:26.207 "aliases": [ 00:18:26.207 "74c47d0a-564e-4b87-bb36-8c50c8e3f62a" 00:18:26.207 ], 00:18:26.207 "product_name": "FTL 
disk", 00:18:26.207 "block_size": 4096, 00:18:26.207 "num_blocks": 20971520, 00:18:26.207 "uuid": "74c47d0a-564e-4b87-bb36-8c50c8e3f62a", 00:18:26.207 "assigned_rate_limits": { 00:18:26.207 "rw_ios_per_sec": 0, 00:18:26.207 "rw_mbytes_per_sec": 0, 00:18:26.207 "r_mbytes_per_sec": 0, 00:18:26.207 "w_mbytes_per_sec": 0 00:18:26.207 }, 00:18:26.207 "claimed": false, 00:18:26.207 "zoned": false, 00:18:26.207 "supported_io_types": { 00:18:26.207 "read": true, 00:18:26.207 "write": true, 00:18:26.207 "unmap": true, 00:18:26.207 "flush": true, 00:18:26.207 "reset": false, 00:18:26.207 "nvme_admin": false, 00:18:26.207 "nvme_io": false, 00:18:26.207 "nvme_io_md": false, 00:18:26.207 "write_zeroes": true, 00:18:26.207 "zcopy": false, 00:18:26.207 "get_zone_info": false, 00:18:26.207 "zone_management": false, 00:18:26.207 "zone_append": false, 00:18:26.207 "compare": false, 00:18:26.207 "compare_and_write": false, 00:18:26.207 "abort": false, 00:18:26.207 "seek_hole": false, 00:18:26.207 "seek_data": false, 00:18:26.207 "copy": false, 00:18:26.207 "nvme_iov_md": false 00:18:26.207 }, 00:18:26.207 "driver_specific": { 00:18:26.207 "ftl": { 00:18:26.207 "base_bdev": "c86826f9-e281-4e71-acce-f40f26ed88fe", 00:18:26.207 "cache": "nvc0n1p0" 00:18:26.207 } 00:18:26.207 } 00:18:26.207 } 00:18:26.207 ] 00:18:26.207 20:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:26.207 20:29:10 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:26.207 20:29:10 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:26.468 20:29:10 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:26.468 20:29:10 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:26.730 [2024-12-12 20:29:10.794265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.730 [2024-12-12 20:29:10.794603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:26.730 [2024-12-12 20:29:10.794668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:26.730 [2024-12-12 20:29:10.794721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.730 [2024-12-12 20:29:10.794794] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:26.730 [2024-12-12 20:29:10.797420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.730 [2024-12-12 20:29:10.797619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:26.730 [2024-12-12 20:29:10.797688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:18:26.730 [2024-12-12 20:29:10.797730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.730 [2024-12-12 20:29:10.798151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.730 [2024-12-12 20:29:10.798268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:26.730 [2024-12-12 20:29:10.798345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:18:26.730 [2024-12-12 20:29:10.798403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.730 [2024-12-12 20:29:10.801730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.730 [2024-12-12 20:29:10.801847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:26.730 
[2024-12-12 20:29:10.801949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:18:26.730 [2024-12-12 20:29:10.802048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.730 [2024-12-12 20:29:10.808219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.730 [2024-12-12 20:29:10.808370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:26.730 [2024-12-12 20:29:10.808462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.093 ms 00:18:26.730 [2024-12-12 20:29:10.808511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.730 [2024-12-12 20:29:10.831671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.730 [2024-12-12 20:29:10.831757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:26.730 [2024-12-12 20:29:10.831826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.993 ms 00:18:26.730 [2024-12-12 20:29:10.831943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.730 [2024-12-12 20:29:10.846494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.730 [2024-12-12 20:29:10.846652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:26.730 [2024-12-12 20:29:10.846711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.467 ms 00:18:26.730 [2024-12-12 20:29:10.846755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.730 [2024-12-12 20:29:10.846956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.730 [2024-12-12 20:29:10.847070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:26.730 [2024-12-12 20:29:10.847145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:18:26.730 [2024-12-12 20:29:10.847208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.730 [2024-12-12 20:29:10.870159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.730 [2024-12-12 20:29:10.870250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:26.731 [2024-12-12 20:29:10.870303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.758 ms 00:18:26.731 [2024-12-12 20:29:10.870343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.731 [2024-12-12 20:29:10.892631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.731 [2024-12-12 20:29:10.892787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:26.731 [2024-12-12 20:29:10.892845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.198 ms 00:18:26.731 [2024-12-12 20:29:10.892889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.731 [2024-12-12 20:29:10.914851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.731 [2024-12-12 20:29:10.915041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:26.731 [2024-12-12 20:29:10.915111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.886 ms 00:18:26.731 [2024-12-12 20:29:10.915155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.731 [2024-12-12 20:29:10.937463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.731 [2024-12-12 20:29:10.937563] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:26.731 [2024-12-12 20:29:10.937682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.192 ms
00:18:26.731 [2024-12-12 20:29:10.937747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:26.731 [2024-12-12 20:29:10.937875] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:26.731 [2024-12-12 20:29:10.938045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:18:26.731-00:18:26.732 [2024-12-12 20:29:10.938151 .. 20:29:10.940681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: all identical, 0 / 261120 wr_cnt: 0 state: free
00:18:26.732 [2024-12-12 20:29:10.940697] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:26.732 [2024-12-12 20:29:10.940706] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 74c47d0a-564e-4b87-bb36-8c50c8e3f62a
00:18:26.732 [2024-12-12 20:29:10.940714] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:26.732 [2024-12-12 20:29:10.940724] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:26.732 [2024-12-12 20:29:10.940731] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:26.732 [2024-12-12 20:29:10.940742] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:26.732 [2024-12-12 20:29:10.940749] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:26.732 [2024-12-12 20:29:10.940758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:26.732 [2024-12-12 20:29:10.940765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:26.732 [2024-12-12 20:29:10.940773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:26.732 [2024-12-12 20:29:10.940780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:26.732 [2024-12-12 20:29:10.940789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:26.732 [2024-12-12 20:29:10.940797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:18:26.732 [2024-12-12 20:29:10.940807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.917 ms
00:18:26.732 [2024-12-12 20:29:10.940814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:26.732 [2024-12-12 20:29:10.953467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:26.732 [2024-12-12 20:29:10.953561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:18:26.732 [2024-12-12 20:29:10.953608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.585 ms
00:18:26.732 [2024-12-12 20:29:10.953630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:26.732 [2024-12-12 20:29:10.954016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:26.732 [2024-12-12 20:29:10.954088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:18:26.732 [2024-12-12 20:29:10.954137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms
00:18:26.732 [2024-12-12 20:29:10.954186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:26.993 [2024-12-12 20:29:10.997321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:26.993 [2024-12-12 20:29:10.997448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:26.993 [2024-12-12 20:29:10.997529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:26.993 [2024-12-12 20:29:10.997551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
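A note on the ftl_dev_dump_stats block above: WAF (write amplification factor) is total media writes divided by user writes, and this run did no user I/O between startup and unload, so total writes 960 (all internal metadata writes) over user writes 0 prints as "inf". A rough sketch of recomputing it from a saved copy of this output — the log file name here is hypothetical:

  # pull the two counters back out of a captured unload trace
  awk '/total writes:/ {t=$NF} /user writes:/ {u=$NF}
       END {printf "WAF: %s\n", (u ? t/u : "inf")}' ftl_unload.log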
00:18:26.993 [2024-12-12 20:29:10.997630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.993 [2024-12-12 20:29:10.997651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:26.993 [2024-12-12 20:29:10.997671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.993 [2024-12-12 20:29:10.997740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.993 [2024-12-12 20:29:10.997848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.993 [2024-12-12 20:29:10.997917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:26.993 [2024-12-12 20:29:10.997967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.993 [2024-12-12 20:29:10.997989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.993 [2024-12-12 20:29:10.998031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.993 [2024-12-12 20:29:10.998051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:26.993 [2024-12-12 20:29:10.998071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.993 [2024-12-12 20:29:10.998090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.993 [2024-12-12 20:29:11.078595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.993 [2024-12-12 20:29:11.078747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:26.994 [2024-12-12 20:29:11.078799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.994 [2024-12-12 20:29:11.078820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.994 [2024-12-12 20:29:11.141019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.994 [2024-12-12 20:29:11.141174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:26.994 [2024-12-12 20:29:11.141231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.994 [2024-12-12 20:29:11.141254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.994 [2024-12-12 20:29:11.141360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.994 [2024-12-12 20:29:11.141447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:26.994 [2024-12-12 20:29:11.141505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.994 [2024-12-12 20:29:11.141526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.994 [2024-12-12 20:29:11.141610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.994 [2024-12-12 20:29:11.141674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:26.994 [2024-12-12 20:29:11.141699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.994 [2024-12-12 20:29:11.141707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.994 [2024-12-12 20:29:11.141817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.994 [2024-12-12 20:29:11.141827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:26.994 [2024-12-12 20:29:11.141837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.994 [2024-12-12 
20:29:11.141846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.994 [2024-12-12 20:29:11.141892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.994 [2024-12-12 20:29:11.141901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:26.994 [2024-12-12 20:29:11.141910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.994 [2024-12-12 20:29:11.141916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.994 [2024-12-12 20:29:11.141964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.994 [2024-12-12 20:29:11.141972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:26.994 [2024-12-12 20:29:11.141981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.994 [2024-12-12 20:29:11.141989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.994 [2024-12-12 20:29:11.142041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.994 [2024-12-12 20:29:11.142050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:26.994 [2024-12-12 20:29:11.142060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.994 [2024-12-12 20:29:11.142067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.994 [2024-12-12 20:29:11.142229] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.922 ms, result 0 00:18:26.994 true 00:18:26.994 20:29:11 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76941 00:18:26.994 20:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76941 ']' 00:18:26.994 20:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76941 00:18:26.994 20:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:26.994 20:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.994 20:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76941 00:18:26.994 killing process with pid 76941 00:18:26.994 20:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:26.994 20:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:26.994 20:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76941' 00:18:26.994 20:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76941 00:18:26.994 20:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76941 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:39.279 20:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:39.279 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:39.279 fio-3.35 00:18:39.279 Starting 1 thread 00:18:43.485 00:18:43.485 test: (groupid=0, jobs=1): err= 0: pid=77129: Thu Dec 12 20:29:26 2024 00:18:43.485 read: IOPS=1211, BW=80.4MiB/s (84.4MB/s)(255MiB/3164msec) 00:18:43.485 slat (nsec): min=3075, max=21307, avg=4413.96, stdev=1973.29 00:18:43.485 clat (usec): min=251, max=1436, avg=377.16, stdev=149.76 00:18:43.485 lat (usec): min=257, max=1449, avg=381.57, stdev=150.19 00:18:43.485 clat percentiles (usec): 00:18:43.485 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 318], 20.00th=[ 322], 00:18:43.485 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 326], 60.00th=[ 330], 00:18:43.485 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 490], 95.00th=[ 734], 00:18:43.485 | 99.00th=[ 1029], 99.50th=[ 1123], 99.90th=[ 1287], 99.95th=[ 1352], 00:18:43.485 | 99.99th=[ 1434] 00:18:43.485 write: IOPS=1219, BW=81.0MiB/s (84.9MB/s)(256MiB/3161msec); 0 zone resets 00:18:43.485 slat (nsec): min=13859, max=76636, avg=18792.78, stdev=3394.16 00:18:43.485 clat (usec): min=297, max=1610, avg=409.09, stdev=165.70 00:18:43.485 lat (usec): min=316, max=1640, avg=427.89, stdev=166.06 00:18:43.485 clat percentiles (usec): 00:18:43.486 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 343], 20.00th=[ 347], 00:18:43.486 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 351], 60.00th=[ 355], 00:18:43.486 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 562], 95.00th=[ 816], 00:18:43.486 | 99.00th=[ 1106], 99.50th=[ 1254], 99.90th=[ 1418], 99.95th=[ 1582], 00:18:43.486 | 99.99th=[ 1614] 00:18:43.486 bw ( KiB/s): min=59432, max=95608, per=99.36%, avg=82416.00, stdev=15433.69, samples=6 00:18:43.486 iops : min= 874, max= 1406, avg=1212.00, stdev=226.97, samples=6 00:18:43.486 lat (usec) : 500=89.15%, 750=5.72%, 
1000=2.95% 00:18:43.486 lat (msec) : 2=2.17% 00:18:43.486 cpu : usr=99.24%, sys=0.13%, ctx=6, majf=0, minf=1167 00:18:43.486 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:43.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.486 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.486 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:43.486 00:18:43.486 Run status group 0 (all jobs): 00:18:43.486 READ: bw=80.4MiB/s (84.4MB/s), 80.4MiB/s-80.4MiB/s (84.4MB/s-84.4MB/s), io=255MiB (267MB), run=3164-3164msec 00:18:43.486 WRITE: bw=81.0MiB/s (84.9MB/s), 81.0MiB/s-81.0MiB/s (84.9MB/s-84.9MB/s), io=256MiB (269MB), run=3161-3161msec 00:18:44.056 ----------------------------------------------------- 00:18:44.056 Suppressions used: 00:18:44.056 count bytes template 00:18:44.056 1 5 /usr/src/fio/parse.c 00:18:44.056 1 8 libtcmalloc_minimal.so 00:18:44.056 1 904 libcrypto.so 00:18:44.056 ----------------------------------------------------- 00:18:44.056 00:18:44.056 20:29:28 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:44.056 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:44.056 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:44.315 20:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:44.315 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:44.315 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:44.315 fio-3.35 00:18:44.315 Starting 2 threads 00:19:10.882 00:19:10.882 first_half: (groupid=0, jobs=1): err= 0: pid=77221: Thu Dec 12 20:29:51 2024 00:19:10.882 read: IOPS=2922, BW=11.4MiB/s (12.0MB/s)(255MiB/22323msec) 00:19:10.882 slat (usec): min=3, max=1170, avg= 4.60, stdev= 4.73 00:19:10.882 clat (usec): min=681, max=264709, avg=33129.98, stdev=17116.08 00:19:10.882 lat (usec): min=685, max=264713, avg=33134.59, stdev=17116.04 00:19:10.882 clat percentiles (msec): 00:19:10.882 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 29], 20.00th=[ 30], 00:19:10.882 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:10.882 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 43], 00:19:10.882 | 99.00th=[ 140], 99.50th=[ 150], 99.90th=[ 199], 99.95th=[ 226], 00:19:10.882 | 99.99th=[ 257] 00:19:10.882 write: IOPS=3476, BW=13.6MiB/s (14.2MB/s)(256MiB/18849msec); 0 zone resets 00:19:10.882 slat (usec): min=3, max=573, avg= 5.96, stdev= 3.97 00:19:10.882 clat (usec): min=369, max=73478, avg=10599.80, stdev=17177.75 00:19:10.882 lat (usec): min=379, max=73482, avg=10605.76, stdev=17177.62 00:19:10.882 clat percentiles (usec): 00:19:10.882 | 1.00th=[ 644], 5.00th=[ 750], 10.00th=[ 857], 20.00th=[ 1172], 00:19:10.882 | 30.00th=[ 2704], 40.00th=[ 3752], 50.00th=[ 4817], 60.00th=[ 5473], 00:19:10.882 | 70.00th=[ 6128], 80.00th=[11076], 90.00th=[29754], 95.00th=[62129], 00:19:10.882 | 99.00th=[67634], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:19:10.882 | 99.99th=[72877] 00:19:10.882 bw ( KiB/s): min= 832, max=46776, per=78.54%, avg=21845.33, stdev=11596.37, samples=24 00:19:10.882 iops : min= 208, max=11694, avg=5461.33, stdev=2899.09, samples=24 00:19:10.882 lat (usec) : 500=0.02%, 750=2.58%, 1000=4.78% 00:19:10.882 lat (msec) : 2=5.68%, 4=8.20%, 10=18.86%, 20=6.20%, 50=47.40% 00:19:10.882 lat (msec) : 100=5.41%, 250=0.86%, 500=0.01% 00:19:10.882 cpu : usr=99.20%, sys=0.11%, ctx=62, majf=0, minf=5591 00:19:10.882 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:10.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.882 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.882 issued rwts: total=65238,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.882 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.882 second_half: (groupid=0, jobs=1): err= 0: pid=77222: Thu Dec 12 20:29:51 2024 00:19:10.882 read: IOPS=2949, BW=11.5MiB/s (12.1MB/s)(254MiB/22088msec) 00:19:10.882 slat (nsec): min=3076, max=27922, avg=3879.52, stdev=758.17 00:19:10.882 clat (usec): min=633, max=270828, avg=33741.44, stdev=15809.35 00:19:10.882 lat (usec): min=637, max=270833, avg=33745.32, stdev=15809.41 00:19:10.882 clat percentiles (msec): 00:19:10.882 | 1.00th=[ 4], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:19:10.882 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:10.882 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 38], 
95.00th=[ 45], 00:19:10.882 | 99.00th=[ 124], 99.50th=[ 140], 99.90th=[ 157], 99.95th=[ 163], 00:19:10.882 | 99.99th=[ 264] 00:19:10.882 write: IOPS=4245, BW=16.6MiB/s (17.4MB/s)(256MiB/15437msec); 0 zone resets 00:19:10.882 slat (usec): min=3, max=355, avg= 5.43, stdev= 2.71 00:19:10.882 clat (usec): min=334, max=73879, avg=9585.25, stdev=16853.34 00:19:10.882 lat (usec): min=341, max=73884, avg=9590.69, stdev=16853.36 00:19:10.882 clat percentiles (usec): 00:19:10.882 | 1.00th=[ 660], 5.00th=[ 783], 10.00th=[ 898], 20.00th=[ 1074], 00:19:10.882 | 30.00th=[ 1319], 40.00th=[ 2638], 50.00th=[ 3851], 60.00th=[ 5145], 00:19:10.882 | 70.00th=[ 5997], 80.00th=[10552], 90.00th=[14746], 95.00th=[62129], 00:19:10.882 | 99.00th=[67634], 99.50th=[68682], 99.90th=[71828], 99.95th=[72877], 00:19:10.882 | 99.99th=[73925] 00:19:10.882 bw ( KiB/s): min= 1016, max=45648, per=100.00%, avg=29127.11, stdev=14465.80, samples=18 00:19:10.882 iops : min= 254, max=11412, avg=7281.78, stdev=3616.45, samples=18 00:19:10.882 lat (usec) : 500=0.02%, 750=1.89%, 1000=6.00% 00:19:10.882 lat (msec) : 2=10.01%, 4=8.25%, 10=13.82%, 20=6.65%, 50=46.85% 00:19:10.882 lat (msec) : 100=5.62%, 250=0.88%, 500=0.01% 00:19:10.882 cpu : usr=99.46%, sys=0.08%, ctx=29, majf=0, minf=5514 00:19:10.882 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:10.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.882 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.882 issued rwts: total=65142,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.882 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.882 00:19:10.882 Run status group 0 (all jobs): 00:19:10.882 READ: bw=22.8MiB/s (23.9MB/s), 11.4MiB/s-11.5MiB/s (12.0MB/s-12.1MB/s), io=509MiB (534MB), run=22088-22323msec 00:19:10.882 WRITE: bw=27.2MiB/s (28.5MB/s), 13.6MiB/s-16.6MiB/s (14.2MB/s-17.4MB/s), io=512MiB (537MB), run=15437-18849msec 00:19:10.882 ----------------------------------------------------- 00:19:10.883 Suppressions used: 00:19:10.883 count bytes template 00:19:10.883 2 10 /usr/src/fio/parse.c 00:19:10.883 3 288 /usr/src/fio/iolog.c 00:19:10.883 1 8 libtcmalloc_minimal.so 00:19:10.883 1 904 libcrypto.so 00:19:10.883 ----------------------------------------------------- 00:19:10.883 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
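The fio_plugin helper being traced here (and before each of the earlier fio runs) shows how autotest_common.sh launches fio against SPDK bdevs under ASAN: it ldd's the spdk_bdev ioengine plugin, picks out the libasan runtime it links against, and preloads both so the sanitizer initializes before the plugin loads. A minimal standalone sketch of the same pattern, using the paths from this run:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  # resolve the ASAN runtime the plugin was linked against
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # the sanitizer must come first in LD_PRELOAD, then the ioengine plugin
  LD_PRELOAD="$asan_lib $plugin" \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio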
00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:10.883 20:29:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:10.883 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:10.883 fio-3.35 00:19:10.883 Starting 1 thread 00:19:25.756 00:19:25.756 test: (groupid=0, jobs=1): err= 0: pid=77519: Thu Dec 12 20:30:07 2024 00:19:25.756 read: IOPS=7721, BW=30.2MiB/s (31.6MB/s)(255MiB/8444msec) 00:19:25.756 slat (usec): min=3, max=205, avg= 5.37, stdev= 1.62 00:19:25.756 clat (usec): min=702, max=32587, avg=16567.40, stdev=1566.29 00:19:25.756 lat (usec): min=707, max=32593, avg=16572.78, stdev=1566.28 00:19:25.756 clat percentiles (usec): 00:19:25.756 | 1.00th=[14615], 5.00th=[15401], 10.00th=[15664], 20.00th=[15926], 00:19:25.756 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16319], 60.00th=[16450], 00:19:25.756 | 70.00th=[16581], 80.00th=[16712], 90.00th=[17171], 95.00th=[19268], 00:19:25.756 | 99.00th=[24249], 99.50th=[25297], 99.90th=[29492], 99.95th=[31065], 00:19:25.756 | 99.99th=[31851] 00:19:25.756 write: IOPS=16.6k, BW=64.7MiB/s (67.8MB/s)(256MiB/3959msec); 0 zone resets 00:19:25.756 slat (usec): min=3, max=712, avg= 6.29, stdev= 4.09 00:19:25.756 clat (usec): min=475, max=45487, avg=7691.63, stdev=9319.43 00:19:25.756 lat (usec): min=481, max=45494, avg=7697.92, stdev=9319.49 00:19:25.756 clat percentiles (usec): 00:19:25.756 | 1.00th=[ 619], 5.00th=[ 709], 10.00th=[ 791], 20.00th=[ 914], 00:19:25.756 | 30.00th=[ 1057], 40.00th=[ 1385], 50.00th=[ 5276], 60.00th=[ 6063], 00:19:25.756 | 70.00th=[ 7177], 80.00th=[ 8848], 90.00th=[26084], 95.00th=[29754], 00:19:25.756 | 99.00th=[32637], 99.50th=[33817], 99.90th=[37487], 99.95th=[38536], 00:19:25.756 | 99.99th=[44303] 00:19:25.756 bw ( KiB/s): min=49728, max=90736, per=98.98%, avg=65536.00, stdev=12486.51, samples=8 00:19:25.757 iops : min=12432, max=22684, avg=16384.00, stdev=3121.63, samples=8 00:19:25.757 lat (usec) : 500=0.01%, 750=3.71%, 1000=9.57% 00:19:25.757 lat (msec) : 2=7.44%, 4=0.46%, 10=19.90%, 20=48.94%, 50=9.97% 00:19:25.757 cpu : usr=99.06%, sys=0.19%, ctx=20, majf=0, minf=5563 00:19:25.757 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:25.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.757 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.757 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.757 00:19:25.757 Run status group 0 (all jobs): 00:19:25.757 READ: bw=30.2MiB/s (31.6MB/s), 30.2MiB/s-30.2MiB/s (31.6MB/s-31.6MB/s), io=255MiB (267MB), run=8444-8444msec 00:19:25.757 WRITE: bw=64.7MiB/s (67.8MB/s), 64.7MiB/s-64.7MiB/s (67.8MB/s-67.8MB/s), io=256MiB (268MB), run=3959-3959msec 00:19:25.757 ----------------------------------------------------- 00:19:25.757 Suppressions used: 00:19:25.757 count bytes template 00:19:25.757 1 5 /usr/src/fio/parse.c 00:19:25.757 2 192 /usr/src/fio/iolog.c 00:19:25.757 1 8 libtcmalloc_minimal.so 00:19:25.757 1 904 libcrypto.so 00:19:25.757 ----------------------------------------------------- 00:19:25.757 00:19:25.757 20:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:25.757 20:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.757 20:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:25.757 20:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:25.757 Remove shared memory files 00:19:25.757 20:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:25.757 20:30:09 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:25.757 20:30:09 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:25.757 20:30:09 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:25.757 20:30:09 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58958 /dev/shm/spdk_tgt_trace.pid75863 00:19:25.757 20:30:09 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:25.757 20:30:09 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:25.757 ************************************ 00:19:25.757 END TEST ftl_fio_basic 00:19:25.757 ************************************ 00:19:25.757 00:19:25.757 real 1m5.892s 00:19:25.757 user 2m19.842s 00:19:25.757 sys 0m2.610s 00:19:25.757 20:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.757 20:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:25.757 20:30:09 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:25.757 20:30:09 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:25.757 20:30:09 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.757 20:30:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:25.757 ************************************ 00:19:25.757 START TEST ftl_bdevperf 00:19:25.757 ************************************ 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:25.757 * Looking for test storage... 
00:19:25.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:25.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.757 --rc genhtml_branch_coverage=1 00:19:25.757 --rc genhtml_function_coverage=1 00:19:25.757 --rc genhtml_legend=1 00:19:25.757 --rc geninfo_all_blocks=1 00:19:25.757 --rc geninfo_unexecuted_blocks=1 00:19:25.757 00:19:25.757 ' 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:25.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.757 --rc genhtml_branch_coverage=1 00:19:25.757 
--rc genhtml_function_coverage=1 00:19:25.757 --rc genhtml_legend=1 00:19:25.757 --rc geninfo_all_blocks=1 00:19:25.757 --rc geninfo_unexecuted_blocks=1 00:19:25.757 00:19:25.757 ' 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:25.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.757 --rc genhtml_branch_coverage=1 00:19:25.757 --rc genhtml_function_coverage=1 00:19:25.757 --rc genhtml_legend=1 00:19:25.757 --rc geninfo_all_blocks=1 00:19:25.757 --rc geninfo_unexecuted_blocks=1 00:19:25.757 00:19:25.757 ' 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:25.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.757 --rc genhtml_branch_coverage=1 00:19:25.757 --rc genhtml_function_coverage=1 00:19:25.757 --rc genhtml_legend=1 00:19:25.757 --rc geninfo_all_blocks=1 00:19:25.757 --rc geninfo_unexecuted_blocks=1 00:19:25.757 00:19:25.757 ' 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:25.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # 
spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:25.757 20:30:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:25.758 20:30:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77746 00:19:25.758 20:30:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:25.758 20:30:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77746 00:19:25.758 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77746 ']' 00:19:25.758 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.758 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.758 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.758 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.758 20:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:25.758 20:30:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:25.758 [2024-12-12 20:30:09.539151] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
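What the trace above boils down to: bdevperf is launched with -z (stay idle until perform_tests arrives over RPC) and -T ftl0 (the target bdev), and the script then waits for the application's RPC socket to come up. A minimal sketch of that launch-and-wait pattern, assuming a fixed retry budget rather than the exact waitforlisten logic in autotest_common.sh:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$bdevperf" -z -T ftl0 &          # start suspended; tests are triggered later via bdevperf.py
    bdevperf_pid=$!
    for ((i = 0; i < 100; i++)); do   # retry budget is illustrative
        # the app is up once its RPC socket answers a trivial request
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done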
00:19:25.758 [2024-12-12 20:30:09.539278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77746 ] 00:19:25.758 [2024-12-12 20:30:09.693073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.758 [2024-12-12 20:30:09.788840] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.323 20:30:10 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.323 20:30:10 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:26.323 20:30:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:26.323 20:30:10 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:26.323 20:30:10 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:26.323 20:30:10 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:26.323 20:30:10 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:26.323 20:30:10 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:26.581 20:30:10 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:26.581 20:30:10 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:26.581 20:30:10 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:26.581 20:30:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:26.581 20:30:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:26.581 20:30:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:26.581 20:30:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:26.582 20:30:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:26.840 20:30:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:26.840 { 00:19:26.840 "name": "nvme0n1", 00:19:26.840 "aliases": [ 00:19:26.840 "7eab7089-edd7-4802-99ed-d2efa3bb755d" 00:19:26.840 ], 00:19:26.840 "product_name": "NVMe disk", 00:19:26.840 "block_size": 4096, 00:19:26.840 "num_blocks": 1310720, 00:19:26.840 "uuid": "7eab7089-edd7-4802-99ed-d2efa3bb755d", 00:19:26.840 "numa_id": -1, 00:19:26.840 "assigned_rate_limits": { 00:19:26.840 "rw_ios_per_sec": 0, 00:19:26.840 "rw_mbytes_per_sec": 0, 00:19:26.840 "r_mbytes_per_sec": 0, 00:19:26.840 "w_mbytes_per_sec": 0 00:19:26.840 }, 00:19:26.840 "claimed": true, 00:19:26.840 "claim_type": "read_many_write_one", 00:19:26.840 "zoned": false, 00:19:26.840 "supported_io_types": { 00:19:26.840 "read": true, 00:19:26.840 "write": true, 00:19:26.840 "unmap": true, 00:19:26.840 "flush": true, 00:19:26.840 "reset": true, 00:19:26.840 "nvme_admin": true, 00:19:26.840 "nvme_io": true, 00:19:26.840 "nvme_io_md": false, 00:19:26.840 "write_zeroes": true, 00:19:26.840 "zcopy": false, 00:19:26.840 "get_zone_info": false, 00:19:26.840 "zone_management": false, 00:19:26.840 "zone_append": false, 00:19:26.840 "compare": true, 00:19:26.840 "compare_and_write": false, 00:19:26.840 "abort": true, 00:19:26.840 "seek_hole": false, 00:19:26.840 "seek_data": false, 00:19:26.840 "copy": true, 00:19:26.840 "nvme_iov_md": false 00:19:26.840 }, 00:19:26.840 "driver_specific": { 00:19:26.840 
"nvme": [ 00:19:26.840 { 00:19:26.840 "pci_address": "0000:00:11.0", 00:19:26.840 "trid": { 00:19:26.840 "trtype": "PCIe", 00:19:26.840 "traddr": "0000:00:11.0" 00:19:26.840 }, 00:19:26.840 "ctrlr_data": { 00:19:26.840 "cntlid": 0, 00:19:26.840 "vendor_id": "0x1b36", 00:19:26.840 "model_number": "QEMU NVMe Ctrl", 00:19:26.840 "serial_number": "12341", 00:19:26.840 "firmware_revision": "8.0.0", 00:19:26.840 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:26.840 "oacs": { 00:19:26.840 "security": 0, 00:19:26.840 "format": 1, 00:19:26.840 "firmware": 0, 00:19:26.840 "ns_manage": 1 00:19:26.840 }, 00:19:26.840 "multi_ctrlr": false, 00:19:26.840 "ana_reporting": false 00:19:26.840 }, 00:19:26.840 "vs": { 00:19:26.840 "nvme_version": "1.4" 00:19:26.840 }, 00:19:26.840 "ns_data": { 00:19:26.840 "id": 1, 00:19:26.840 "can_share": false 00:19:26.840 } 00:19:26.840 } 00:19:26.840 ], 00:19:26.840 "mp_policy": "active_passive" 00:19:26.840 } 00:19:26.840 } 00:19:26.840 ]' 00:19:26.840 20:30:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:26.840 20:30:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:26.840 20:30:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:26.840 20:30:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:26.840 20:30:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:26.840 20:30:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:26.840 20:30:10 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:26.840 20:30:10 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:26.840 20:30:10 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:26.840 20:30:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:26.840 20:30:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:27.098 20:30:11 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=960bfb9a-f546-48a9-a911-7a7e8dd2ac4d 00:19:27.098 20:30:11 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:27.098 20:30:11 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 960bfb9a-f546-48a9-a911-7a7e8dd2ac4d 00:19:27.356 20:30:11 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:27.356 20:30:11 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=559f68f7-9907-4148-a276-545878c8f3cb 00:19:27.356 20:30:11 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 559f68f7-9907-4148-a276-545878c8f3cb 00:19:27.614 20:30:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=9b59ff99-56a2-498b-9abe-65f4157c0fdd 00:19:27.614 20:30:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9b59ff99-56a2-498b-9abe-65f4157c0fdd 00:19:27.614 20:30:11 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:27.614 20:30:11 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:27.614 20:30:11 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=9b59ff99-56a2-498b-9abe-65f4157c0fdd 00:19:27.614 20:30:11 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:27.614 20:30:11 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 9b59ff99-56a2-498b-9abe-65f4157c0fdd 00:19:27.614 20:30:11 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=9b59ff99-56a2-498b-9abe-65f4157c0fdd 00:19:27.614 20:30:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:27.614 20:30:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:27.614 20:30:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:27.614 20:30:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9b59ff99-56a2-498b-9abe-65f4157c0fdd 00:19:27.872 20:30:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:27.872 { 00:19:27.872 "name": "9b59ff99-56a2-498b-9abe-65f4157c0fdd", 00:19:27.872 "aliases": [ 00:19:27.872 "lvs/nvme0n1p0" 00:19:27.872 ], 00:19:27.872 "product_name": "Logical Volume", 00:19:27.872 "block_size": 4096, 00:19:27.872 "num_blocks": 26476544, 00:19:27.872 "uuid": "9b59ff99-56a2-498b-9abe-65f4157c0fdd", 00:19:27.872 "assigned_rate_limits": { 00:19:27.872 "rw_ios_per_sec": 0, 00:19:27.872 "rw_mbytes_per_sec": 0, 00:19:27.872 "r_mbytes_per_sec": 0, 00:19:27.872 "w_mbytes_per_sec": 0 00:19:27.872 }, 00:19:27.872 "claimed": false, 00:19:27.872 "zoned": false, 00:19:27.872 "supported_io_types": { 00:19:27.872 "read": true, 00:19:27.872 "write": true, 00:19:27.872 "unmap": true, 00:19:27.872 "flush": false, 00:19:27.872 "reset": true, 00:19:27.872 "nvme_admin": false, 00:19:27.872 "nvme_io": false, 00:19:27.872 "nvme_io_md": false, 00:19:27.872 "write_zeroes": true, 00:19:27.872 "zcopy": false, 00:19:27.872 "get_zone_info": false, 00:19:27.872 "zone_management": false, 00:19:27.872 "zone_append": false, 00:19:27.872 "compare": false, 00:19:27.872 "compare_and_write": false, 00:19:27.872 "abort": false, 00:19:27.872 "seek_hole": true, 00:19:27.872 "seek_data": true, 00:19:27.872 "copy": false, 00:19:27.872 "nvme_iov_md": false 00:19:27.872 }, 00:19:27.872 "driver_specific": { 00:19:27.872 "lvol": { 00:19:27.872 "lvol_store_uuid": "559f68f7-9907-4148-a276-545878c8f3cb", 00:19:27.872 "base_bdev": "nvme0n1", 00:19:27.872 "thin_provision": true, 00:19:27.872 "num_allocated_clusters": 0, 00:19:27.872 "snapshot": false, 00:19:27.872 "clone": false, 00:19:27.872 "esnap_clone": false 00:19:27.872 } 00:19:27.872 } 00:19:27.872 } 00:19:27.872 ]' 00:19:27.872 20:30:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:27.872 20:30:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:27.872 20:30:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:27.872 20:30:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:27.872 20:30:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:27.872 20:30:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:27.872 20:30:11 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:27.872 20:30:11 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:27.872 20:30:11 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:28.130 20:30:12 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:28.130 20:30:12 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:28.130 20:30:12 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 9b59ff99-56a2-498b-9abe-65f4157c0fdd 00:19:28.130 20:30:12 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=9b59ff99-56a2-498b-9abe-65f4157c0fdd 00:19:28.130 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:28.130 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:28.130 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:28.130 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9b59ff99-56a2-498b-9abe-65f4157c0fdd 00:19:28.389 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:28.389 { 00:19:28.389 "name": "9b59ff99-56a2-498b-9abe-65f4157c0fdd", 00:19:28.389 "aliases": [ 00:19:28.389 "lvs/nvme0n1p0" 00:19:28.389 ], 00:19:28.389 "product_name": "Logical Volume", 00:19:28.389 "block_size": 4096, 00:19:28.389 "num_blocks": 26476544, 00:19:28.389 "uuid": "9b59ff99-56a2-498b-9abe-65f4157c0fdd", 00:19:28.389 "assigned_rate_limits": { 00:19:28.389 "rw_ios_per_sec": 0, 00:19:28.389 "rw_mbytes_per_sec": 0, 00:19:28.389 "r_mbytes_per_sec": 0, 00:19:28.389 "w_mbytes_per_sec": 0 00:19:28.389 }, 00:19:28.389 "claimed": false, 00:19:28.389 "zoned": false, 00:19:28.389 "supported_io_types": { 00:19:28.389 "read": true, 00:19:28.389 "write": true, 00:19:28.389 "unmap": true, 00:19:28.389 "flush": false, 00:19:28.389 "reset": true, 00:19:28.389 "nvme_admin": false, 00:19:28.389 "nvme_io": false, 00:19:28.389 "nvme_io_md": false, 00:19:28.389 "write_zeroes": true, 00:19:28.389 "zcopy": false, 00:19:28.389 "get_zone_info": false, 00:19:28.389 "zone_management": false, 00:19:28.389 "zone_append": false, 00:19:28.389 "compare": false, 00:19:28.389 "compare_and_write": false, 00:19:28.389 "abort": false, 00:19:28.389 "seek_hole": true, 00:19:28.389 "seek_data": true, 00:19:28.389 "copy": false, 00:19:28.389 "nvme_iov_md": false 00:19:28.389 }, 00:19:28.389 "driver_specific": { 00:19:28.389 "lvol": { 00:19:28.389 "lvol_store_uuid": "559f68f7-9907-4148-a276-545878c8f3cb", 00:19:28.389 "base_bdev": "nvme0n1", 00:19:28.389 "thin_provision": true, 00:19:28.389 "num_allocated_clusters": 0, 00:19:28.389 "snapshot": false, 00:19:28.389 "clone": false, 00:19:28.389 "esnap_clone": false 00:19:28.389 } 00:19:28.389 } 00:19:28.389 } 00:19:28.389 ]' 00:19:28.389 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:28.389 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:28.389 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:28.389 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:28.389 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:28.389 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:28.389 20:30:12 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:28.389 20:30:12 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:28.647 20:30:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:28.647 20:30:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 9b59ff99-56a2-498b-9abe-65f4157c0fdd 00:19:28.647 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=9b59ff99-56a2-498b-9abe-65f4157c0fdd 00:19:28.647 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:28.647 20:30:12 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:28.647 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:28.647 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9b59ff99-56a2-498b-9abe-65f4157c0fdd 00:19:28.905 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:28.905 { 00:19:28.905 "name": "9b59ff99-56a2-498b-9abe-65f4157c0fdd", 00:19:28.905 "aliases": [ 00:19:28.905 "lvs/nvme0n1p0" 00:19:28.905 ], 00:19:28.905 "product_name": "Logical Volume", 00:19:28.905 "block_size": 4096, 00:19:28.905 "num_blocks": 26476544, 00:19:28.905 "uuid": "9b59ff99-56a2-498b-9abe-65f4157c0fdd", 00:19:28.905 "assigned_rate_limits": { 00:19:28.905 "rw_ios_per_sec": 0, 00:19:28.905 "rw_mbytes_per_sec": 0, 00:19:28.905 "r_mbytes_per_sec": 0, 00:19:28.905 "w_mbytes_per_sec": 0 00:19:28.905 }, 00:19:28.905 "claimed": false, 00:19:28.905 "zoned": false, 00:19:28.905 "supported_io_types": { 00:19:28.905 "read": true, 00:19:28.905 "write": true, 00:19:28.905 "unmap": true, 00:19:28.905 "flush": false, 00:19:28.905 "reset": true, 00:19:28.905 "nvme_admin": false, 00:19:28.905 "nvme_io": false, 00:19:28.905 "nvme_io_md": false, 00:19:28.905 "write_zeroes": true, 00:19:28.905 "zcopy": false, 00:19:28.905 "get_zone_info": false, 00:19:28.905 "zone_management": false, 00:19:28.905 "zone_append": false, 00:19:28.905 "compare": false, 00:19:28.905 "compare_and_write": false, 00:19:28.905 "abort": false, 00:19:28.905 "seek_hole": true, 00:19:28.905 "seek_data": true, 00:19:28.905 "copy": false, 00:19:28.905 "nvme_iov_md": false 00:19:28.905 }, 00:19:28.905 "driver_specific": { 00:19:28.905 "lvol": { 00:19:28.905 "lvol_store_uuid": "559f68f7-9907-4148-a276-545878c8f3cb", 00:19:28.905 "base_bdev": "nvme0n1", 00:19:28.905 "thin_provision": true, 00:19:28.905 "num_allocated_clusters": 0, 00:19:28.905 "snapshot": false, 00:19:28.905 "clone": false, 00:19:28.905 "esnap_clone": false 00:19:28.905 } 00:19:28.905 } 00:19:28.906 } 00:19:28.906 ]' 00:19:28.906 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:28.906 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:28.906 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:28.906 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:28.906 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:28.906 20:30:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:28.906 20:30:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:28.906 20:30:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9b59ff99-56a2-498b-9abe-65f4157c0fdd -c nvc0n1p0 --l2p_dram_limit 20 00:19:29.164 [2024-12-12 20:30:13.145872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.164 [2024-12-12 20:30:13.145995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:29.165 [2024-12-12 20:30:13.146011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:29.165 [2024-12-12 20:30:13.146019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.165 [2024-12-12 20:30:13.146067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.165 [2024-12-12 20:30:13.146076] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.165 [2024-12-12 20:30:13.146082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:29.165 [2024-12-12 20:30:13.146089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.165 [2024-12-12 20:30:13.146103] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:29.165 [2024-12-12 20:30:13.146694] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:29.165 [2024-12-12 20:30:13.146708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.165 [2024-12-12 20:30:13.146715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.165 [2024-12-12 20:30:13.146722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:19:29.165 [2024-12-12 20:30:13.146729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.165 [2024-12-12 20:30:13.146774] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 469f2038-b3df-41ff-9328-07a984288948 00:19:29.165 [2024-12-12 20:30:13.147734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.165 [2024-12-12 20:30:13.147755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:29.165 [2024-12-12 20:30:13.147768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:29.165 [2024-12-12 20:30:13.147774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.165 [2024-12-12 20:30:13.152733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.165 [2024-12-12 20:30:13.152833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.165 [2024-12-12 20:30:13.152848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.930 ms 00:19:29.165 [2024-12-12 20:30:13.152856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.165 [2024-12-12 20:30:13.152923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.165 [2024-12-12 20:30:13.152930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.165 [2024-12-12 20:30:13.152941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:29.165 [2024-12-12 20:30:13.152947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.165 [2024-12-12 20:30:13.152985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.165 [2024-12-12 20:30:13.152993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:29.165 [2024-12-12 20:30:13.153000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:29.165 [2024-12-12 20:30:13.153006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.165 [2024-12-12 20:30:13.153024] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:29.165 [2024-12-12 20:30:13.155937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.165 [2024-12-12 20:30:13.156036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.165 [2024-12-12 20:30:13.156047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.921 ms 00:19:29.165 [2024-12-12 20:30:13.156059] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.165 [2024-12-12 20:30:13.156085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.165 [2024-12-12 20:30:13.156093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:29.165 [2024-12-12 20:30:13.156100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:29.165 [2024-12-12 20:30:13.156107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.165 [2024-12-12 20:30:13.156124] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:29.165 [2024-12-12 20:30:13.156237] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:29.165 [2024-12-12 20:30:13.156246] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:29.165 [2024-12-12 20:30:13.156256] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:29.165 [2024-12-12 20:30:13.156264] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:29.165 [2024-12-12 20:30:13.156272] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:29.165 [2024-12-12 20:30:13.156278] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:29.165 [2024-12-12 20:30:13.156285] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:29.165 [2024-12-12 20:30:13.156291] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:29.165 [2024-12-12 20:30:13.156298] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:29.165 [2024-12-12 20:30:13.156313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.165 [2024-12-12 20:30:13.156320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:29.165 [2024-12-12 20:30:13.156326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:19:29.165 [2024-12-12 20:30:13.156333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.165 [2024-12-12 20:30:13.156396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.165 [2024-12-12 20:30:13.156404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:29.165 [2024-12-12 20:30:13.156410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:29.165 [2024-12-12 20:30:13.156434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.165 [2024-12-12 20:30:13.156503] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:29.165 [2024-12-12 20:30:13.156513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:29.165 [2024-12-12 20:30:13.156520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:29.165 [2024-12-12 20:30:13.156528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.165 [2024-12-12 20:30:13.156533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:29.165 [2024-12-12 20:30:13.156540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:29.165 [2024-12-12 20:30:13.156545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:29.165 
[2024-12-12 20:30:13.156552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:29.165 [2024-12-12 20:30:13.156560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:29.165 [2024-12-12 20:30:13.156566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:29.165 [2024-12-12 20:30:13.156571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:29.165 [2024-12-12 20:30:13.156583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:29.165 [2024-12-12 20:30:13.156588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:29.165 [2024-12-12 20:30:13.156595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:29.165 [2024-12-12 20:30:13.156599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:29.165 [2024-12-12 20:30:13.156608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.165 [2024-12-12 20:30:13.156613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:29.165 [2024-12-12 20:30:13.156619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:29.165 [2024-12-12 20:30:13.156624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.165 [2024-12-12 20:30:13.156630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:29.165 [2024-12-12 20:30:13.156635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:29.165 [2024-12-12 20:30:13.156641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.165 [2024-12-12 20:30:13.156646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:29.165 [2024-12-12 20:30:13.156652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:29.165 [2024-12-12 20:30:13.156656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.165 [2024-12-12 20:30:13.156662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:29.165 [2024-12-12 20:30:13.156667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:29.165 [2024-12-12 20:30:13.156673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.165 [2024-12-12 20:30:13.156678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:29.165 [2024-12-12 20:30:13.156684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:29.165 [2024-12-12 20:30:13.156688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.165 [2024-12-12 20:30:13.156696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:29.165 [2024-12-12 20:30:13.156701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:29.165 [2024-12-12 20:30:13.156707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:29.165 [2024-12-12 20:30:13.156712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:29.165 [2024-12-12 20:30:13.156718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:29.165 [2024-12-12 20:30:13.156722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:29.165 [2024-12-12 20:30:13.156729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:29.165 [2024-12-12 20:30:13.156734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:29.165 [2024-12-12 20:30:13.156741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.165 [2024-12-12 20:30:13.156748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:29.165 [2024-12-12 20:30:13.156754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:29.165 [2024-12-12 20:30:13.156759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.165 [2024-12-12 20:30:13.156765] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:29.165 [2024-12-12 20:30:13.156770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:29.165 [2024-12-12 20:30:13.156777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:29.165 [2024-12-12 20:30:13.156782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.165 [2024-12-12 20:30:13.156792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:29.165 [2024-12-12 20:30:13.156797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:29.165 [2024-12-12 20:30:13.156803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:29.166 [2024-12-12 20:30:13.156808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:29.166 [2024-12-12 20:30:13.156814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:29.166 [2024-12-12 20:30:13.156819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:29.166 [2024-12-12 20:30:13.156826] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:29.166 [2024-12-12 20:30:13.156834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:29.166 [2024-12-12 20:30:13.156841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:29.166 [2024-12-12 20:30:13.156847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:29.166 [2024-12-12 20:30:13.156853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:29.166 [2024-12-12 20:30:13.156858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:29.166 [2024-12-12 20:30:13.156864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:29.166 [2024-12-12 20:30:13.156869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:29.166 [2024-12-12 20:30:13.156876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:29.166 [2024-12-12 20:30:13.156881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:29.166 [2024-12-12 20:30:13.156890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:29.166 [2024-12-12 20:30:13.156895] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:29.166 [2024-12-12 20:30:13.156902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:29.166 [2024-12-12 20:30:13.156907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:29.166 [2024-12-12 20:30:13.156914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:29.166 [2024-12-12 20:30:13.156919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:29.166 [2024-12-12 20:30:13.156926] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:29.166 [2024-12-12 20:30:13.156931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:29.166 [2024-12-12 20:30:13.156940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:29.166 [2024-12-12 20:30:13.156947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:29.166 [2024-12-12 20:30:13.156954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:29.166 [2024-12-12 20:30:13.156959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:29.166 [2024-12-12 20:30:13.156966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.166 [2024-12-12 20:30:13.156971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:29.166 [2024-12-12 20:30:13.156978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:19:29.166 [2024-12-12 20:30:13.156983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.166 [2024-12-12 20:30:13.157020] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
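Condensed from the xtrace above, the stack that FTL is starting on here was assembled with six RPCs (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the UUIDs are the ones this run produced, after clear_lvols had removed the stale lvstore 960bfb9a-f546-48a9-a911-7a7e8dd2ac4d). The 5171 MiB cache split works out to 5% of the 103424 MiB base volume:

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe -> nvme0n1 (5120 MiB)
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore 559f68f7-...
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 559f68f7-9907-4148-a276-545878c8f3cb   # thin lvol (oversubscribes the 5120 MiB base)
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe -> nvc0n1
    rpc.py bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB split -> nvc0n1p0
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9b59ff99-56a2-498b-9abe-65f4157c0fdd -c nvc0n1p0 --l2p_dram_limit 20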
00:19:29.166 [2024-12-12 20:30:13.157028] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:31.065 [2024-12-12 20:30:15.226583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.065 [2024-12-12 20:30:15.226787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:31.065 [2024-12-12 20:30:15.226811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2069.553 ms 00:19:31.065 [2024-12-12 20:30:15.226821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.065 [2024-12-12 20:30:15.251460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.065 [2024-12-12 20:30:15.251495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.065 [2024-12-12 20:30:15.251508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.442 ms 00:19:31.065 [2024-12-12 20:30:15.251516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.065 [2024-12-12 20:30:15.251630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.065 [2024-12-12 20:30:15.251641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:31.065 [2024-12-12 20:30:15.251653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:31.065 [2024-12-12 20:30:15.251660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.323 [2024-12-12 20:30:15.294138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.323 [2024-12-12 20:30:15.294176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.323 [2024-12-12 20:30:15.294190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.443 ms 00:19:31.323 [2024-12-12 20:30:15.294198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.323 [2024-12-12 20:30:15.294235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.323 [2024-12-12 20:30:15.294245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.323 [2024-12-12 20:30:15.294255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:31.323 [2024-12-12 20:30:15.294264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.323 [2024-12-12 20:30:15.294625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.323 [2024-12-12 20:30:15.294640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.323 [2024-12-12 20:30:15.294650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:19:31.323 [2024-12-12 20:30:15.294657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.323 [2024-12-12 20:30:15.294761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.323 [2024-12-12 20:30:15.294774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.323 [2024-12-12 20:30:15.294786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:31.323 [2024-12-12 20:30:15.294793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.323 [2024-12-12 20:30:15.307549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.323 [2024-12-12 20:30:15.307578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.323 [2024-12-12 
20:30:15.307589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.738 ms 00:19:31.323 [2024-12-12 20:30:15.307603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.323 [2024-12-12 20:30:15.318883] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:31.323 [2024-12-12 20:30:15.324004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.323 [2024-12-12 20:30:15.324036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:31.323 [2024-12-12 20:30:15.324047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.342 ms 00:19:31.323 [2024-12-12 20:30:15.324056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.323 [2024-12-12 20:30:15.380094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.323 [2024-12-12 20:30:15.380132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:31.323 [2024-12-12 20:30:15.380144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.016 ms 00:19:31.323 [2024-12-12 20:30:15.380153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.323 [2024-12-12 20:30:15.380327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.323 [2024-12-12 20:30:15.380342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:31.323 [2024-12-12 20:30:15.380350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:19:31.323 [2024-12-12 20:30:15.380362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.323 [2024-12-12 20:30:15.403251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.323 [2024-12-12 20:30:15.403285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:31.323 [2024-12-12 20:30:15.403296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.850 ms 00:19:31.323 [2024-12-12 20:30:15.403306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.323 [2024-12-12 20:30:15.425365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.323 [2024-12-12 20:30:15.425397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:31.323 [2024-12-12 20:30:15.425408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.029 ms 00:19:31.323 [2024-12-12 20:30:15.425431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.323 [2024-12-12 20:30:15.425976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.323 [2024-12-12 20:30:15.425991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:31.324 [2024-12-12 20:30:15.425999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:19:31.324 [2024-12-12 20:30:15.426008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.324 [2024-12-12 20:30:15.490148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.324 [2024-12-12 20:30:15.490187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:31.324 [2024-12-12 20:30:15.490198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.098 ms 00:19:31.324 [2024-12-12 20:30:15.490207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.324 [2024-12-12 
20:30:15.513501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.324 [2024-12-12 20:30:15.513536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:31.324 [2024-12-12 20:30:15.513548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.235 ms 00:19:31.324 [2024-12-12 20:30:15.513558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.324 [2024-12-12 20:30:15.535914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.324 [2024-12-12 20:30:15.535944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:31.324 [2024-12-12 20:30:15.535954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.326 ms 00:19:31.324 [2024-12-12 20:30:15.535964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.581 [2024-12-12 20:30:15.559089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.581 [2024-12-12 20:30:15.559123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:31.581 [2024-12-12 20:30:15.559134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.095 ms 00:19:31.581 [2024-12-12 20:30:15.559143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.581 [2024-12-12 20:30:15.559177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.581 [2024-12-12 20:30:15.559190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:31.581 [2024-12-12 20:30:15.559199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:31.581 [2024-12-12 20:30:15.559207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.581 [2024-12-12 20:30:15.559276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.581 [2024-12-12 20:30:15.559288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:31.581 [2024-12-12 20:30:15.559296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:31.581 [2024-12-12 20:30:15.559305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.581 [2024-12-12 20:30:15.560092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2413.833 ms, result 0 00:19:31.581 { 00:19:31.581 "name": "ftl0", 00:19:31.581 "uuid": "469f2038-b3df-41ff-9328-07a984288948" 00:19:31.581 } 00:19:31.581 20:30:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:31.581 20:30:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:31.581 20:30:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:31.581 20:30:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:31.859 [2024-12-12 20:30:15.856470] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:31.859 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:31.859 Zero copy mechanism will not be used. 00:19:31.859 Running I/O for 4 seconds... 
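While the first pass runs, the layout dump above can be cross-checked: ftl0 exposes 20971520 logical 4 KiB blocks, each needing one 4-byte L2P entry, which is exactly the 80.00 MiB l2p region, and --l2p_dram_limit 20 is why ftl_l2p_cache reported "l2p maximum resident size is: 19 (of 20) MiB" instead of caching the whole table. Illustrative arithmetic only:

    echo $(( 20971520 * 4 / 1024 / 1024 ))      # 80    -> MiB of L2P table ("Region l2p ... blocks: 80.00 MiB")
    echo $(( 20971520 * 4096 / 1024 / 1024 ))   # 81920 -> MiB (80 GiB) of logical space behind ftl0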
00:19:33.755 3035.00 IOPS, 201.54 MiB/s [2024-12-12T20:30:18.916Z] 3148.00 IOPS, 209.05 MiB/s [2024-12-12T20:30:20.288Z] 3165.00 IOPS, 210.18 MiB/s [2024-12-12T20:30:20.288Z] 3126.25 IOPS, 207.60 MiB/s 00:19:36.060 Latency(us) 00:19:36.060 [2024-12-12T20:30:20.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.060 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:36.060 ftl0 : 4.00 3125.01 207.52 0.00 0.00 336.80 169.35 2092.11 00:19:36.060 [2024-12-12T20:30:20.288Z] =================================================================================================================== 00:19:36.060 [2024-12-12T20:30:20.288Z] Total : 3125.01 207.52 0.00 0.00 336.80 169.35 2092.11 00:19:36.060 [2024-12-12 20:30:19.866331] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:36.060 { 00:19:36.060 "results": [ 00:19:36.060 { 00:19:36.060 "job": "ftl0", 00:19:36.060 "core_mask": "0x1", 00:19:36.060 "workload": "randwrite", 00:19:36.060 "status": "finished", 00:19:36.060 "queue_depth": 1, 00:19:36.060 "io_size": 69632, 00:19:36.060 "runtime": 4.001907, 00:19:36.060 "iops": 3125.0101514103153, 00:19:36.060 "mibps": 207.52020536709125, 00:19:36.060 "io_failed": 0, 00:19:36.060 "io_timeout": 0, 00:19:36.060 "avg_latency_us": 336.7996201208035, 00:19:36.060 "min_latency_us": 169.35384615384615, 00:19:36.060 "max_latency_us": 2092.110769230769 00:19:36.060 } 00:19:36.060 ], 00:19:36.060 "core_count": 1 00:19:36.060 } 00:19:36.060 20:30:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:36.060 [2024-12-12 20:30:19.981595] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:36.060 Running I/O for 4 seconds... 
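One sanity check on the q=1 pass above: bdevperf's MiB/s column is just IOPS times I/O size, and the 69632-byte I/Os (68 KiB, just above the 65536-byte zero-copy threshold, hence the notice) at 3125.01 IOPS reproduce the 207.52 MiB/s in the table:

    awk 'BEGIN { print 3125.01 * 69632 / 1048576 }'   # ~207.52 MiB/s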
00:19:37.929 10920.00 IOPS, 42.66 MiB/s [2024-12-12T20:30:23.092Z] 10757.50 IOPS, 42.02 MiB/s [2024-12-12T20:30:24.027Z] 10645.67 IOPS, 41.58 MiB/s [2024-12-12T20:30:24.027Z] 10579.50 IOPS, 41.33 MiB/s 00:19:39.799 Latency(us) 00:19:39.799 [2024-12-12T20:30:24.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.799 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:39.799 ftl0 : 4.01 10574.50 41.31 0.00 0.00 12081.94 234.73 32465.53 00:19:39.799 [2024-12-12T20:30:24.027Z] =================================================================================================================== 00:19:39.799 [2024-12-12T20:30:24.027Z] Total : 10574.50 41.31 0.00 0.00 12081.94 0.00 32465.53 00:19:39.799 { 00:19:39.799 "results": [ 00:19:39.799 { 00:19:39.799 "job": "ftl0", 00:19:39.799 "core_mask": "0x1", 00:19:39.799 "workload": "randwrite", 00:19:39.799 "status": "finished", 00:19:39.799 "queue_depth": 128, 00:19:39.799 "io_size": 4096, 00:19:39.799 "runtime": 4.013997, 00:19:39.799 "iops": 10574.497190705424, 00:19:39.799 "mibps": 41.30662965119306, 00:19:39.799 "io_failed": 0, 00:19:39.799 "io_timeout": 0, 00:19:39.799 "avg_latency_us": 12081.94317369762, 00:19:39.799 "min_latency_us": 234.7323076923077, 00:19:39.799 "max_latency_us": 32465.526153846153 00:19:39.799 } 00:19:39.799 ], 00:19:39.799 "core_count": 1 00:19:39.799 } 00:19:39.799 [2024-12-12 20:30:24.004489] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:39.799 20:30:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:40.059 [2024-12-12 20:30:24.110572] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:40.059 Running I/O for 4 seconds... 
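The verify pass now running touches the whole FTL address space: the "length 0x1400000" in its LBA range below is the same 20971520 blocks counted in the layout check earlier, i.e. 80 GiB written and then read back:

    printf '%d\n' 0x1400000                    # 20971520 blocks
    echo $(( 0x1400000 * 4096 / 1024 ** 3 ))   # 80 -> GiB verified end to end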
00:19:41.937 8342.00 IOPS, 32.59 MiB/s [2024-12-12T20:30:27.540Z] 8482.50 IOPS, 33.13 MiB/s [2024-12-12T20:30:28.474Z] 8520.00 IOPS, 33.28 MiB/s [2024-12-12T20:30:28.474Z] 8561.00 IOPS, 33.44 MiB/s 00:19:44.246 Latency(us) 00:19:44.246 [2024-12-12T20:30:28.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.246 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:44.246 Verification LBA range: start 0x0 length 0x1400000 00:19:44.246 ftl0 : 4.01 8572.26 33.49 0.00 0.00 14884.99 225.28 23592.96 00:19:44.246 [2024-12-12T20:30:28.474Z] =================================================================================================================== 00:19:44.246 [2024-12-12T20:30:28.474Z] Total : 8572.26 33.49 0.00 0.00 14884.99 0.00 23592.96 00:19:44.246
[2024-12-12 20:30:28.135250] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:44.246 { 00:19:44.246 "results": [ 00:19:44.246 { 00:19:44.246 "job": "ftl0", 00:19:44.247 "core_mask": "0x1", 00:19:44.247 "workload": "verify", 00:19:44.247 "status": "finished", 00:19:44.247 "verify_range": { 00:19:44.247 "start": 0, 00:19:44.247 "length": 20971520 00:19:44.247 }, 00:19:44.247 "queue_depth": 128, 00:19:44.247 "io_size": 4096, 00:19:44.247 "runtime": 4.009446, 00:19:44.247 "iops": 8572.256616001312, 00:19:44.247 "mibps": 33.485377406255125, 00:19:44.247 "io_failed": 0, 00:19:44.247 "io_timeout": 0, 00:19:44.247 "avg_latency_us": 14884.992890937981, 00:19:44.247 "min_latency_us": 225.28, 00:19:44.247 "max_latency_us": 23592.96 00:19:44.247 } 00:19:44.247 ], 00:19:44.247 "core_count": 1 00:19:44.247 } 00:19:44.247
20:30:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 [2024-12-12 20:30:28.342284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-12-12 20:30:28.342536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel [2024-12-12 20:30:28.342612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms [2024-12-12 20:30:28.342640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-12-12 20:30:28.342683] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread [2024-12-12 20:30:28.345668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-12-12 20:30:28.345781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device [2024-12-12 20:30:28.345843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.891 ms [2024-12-12 20:30:28.345867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-12-12 20:30:28.347453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-12-12 20:30:28.347560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller [2024-12-12 20:30:28.347621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms [2024-12-12 20:30:28.347651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.506 [2024-12-12 20:30:28.492160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.506 [2024-12-12 20:30:28.492369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:44.506
[2024-12-12 20:30:28.492398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 144.463 ms 00:19:44.506 [2024-12-12 20:30:28.492408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.506 [2024-12-12 20:30:28.498555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.506 [2024-12-12 20:30:28.498587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:44.506 [2024-12-12 20:30:28.498602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.094 ms 00:19:44.506 [2024-12-12 20:30:28.498614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.506 [2024-12-12 20:30:28.522407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.506 [2024-12-12 20:30:28.522451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:44.506 [2024-12-12 20:30:28.522466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.728 ms 00:19:44.506 [2024-12-12 20:30:28.522474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.506 [2024-12-12 20:30:28.537355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.506 [2024-12-12 20:30:28.537399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:44.506 [2024-12-12 20:30:28.537426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.844 ms 00:19:44.506 [2024-12-12 20:30:28.537435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.506 [2024-12-12 20:30:28.537605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.506 [2024-12-12 20:30:28.537618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:44.506 [2024-12-12 20:30:28.537631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:19:44.506 [2024-12-12 20:30:28.537639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.506 [2024-12-12 20:30:28.560381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.506 [2024-12-12 20:30:28.560567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:44.506 [2024-12-12 20:30:28.560589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.725 ms 00:19:44.506 [2024-12-12 20:30:28.560598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.506 [2024-12-12 20:30:28.583264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.506 [2024-12-12 20:30:28.583297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:44.506 [2024-12-12 20:30:28.583310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.632 ms 00:19:44.506 [2024-12-12 20:30:28.583318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.506 [2024-12-12 20:30:28.605112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.506 [2024-12-12 20:30:28.605143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:44.506 [2024-12-12 20:30:28.605155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.759 ms 00:19:44.506 [2024-12-12 20:30:28.605163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.506 [2024-12-12 20:30:28.627651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.506 [2024-12-12 20:30:28.627768] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:44.506 [2024-12-12 20:30:28.627829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.417 ms 00:19:44.506 [2024-12-12 20:30:28.627852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.506 [2024-12-12 20:30:28.627922] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:44.506 [2024-12-12 20:30:28.628016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 
[2024-12-12 20:30:28.628217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:44.506 [2024-12-12 20:30:28.628267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:44.507 [2024-12-12 20:30:28.628476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:44.507 [2024-12-12 20:30:28.628949] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:44.507 [2024-12-12 20:30:28.628959] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 469f2038-b3df-41ff-9328-07a984288948 00:19:44.507 [2024-12-12 20:30:28.628970] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:44.507 [2024-12-12 20:30:28.628979] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:44.507 [2024-12-12 20:30:28.628986] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:44.507 [2024-12-12 20:30:28.628996] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:44.507 [2024-12-12 20:30:28.629003] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:44.507 [2024-12-12 20:30:28.629013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:44.507 [2024-12-12 20:30:28.629020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:44.507 [2024-12-12 20:30:28.629030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:44.507 [2024-12-12 20:30:28.629037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:44.507 [2024-12-12 20:30:28.629045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.507 [2024-12-12 20:30:28.629053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:44.507 [2024-12-12 20:30:28.629063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:19:44.507 [2024-12-12 20:30:28.629069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.507 [2024-12-12 20:30:28.641980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.507 [2024-12-12 20:30:28.642010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:44.507 [2024-12-12 20:30:28.642023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.873 ms 00:19:44.507 [2024-12-12 20:30:28.642032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.507 [2024-12-12 20:30:28.642407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.507 [2024-12-12 20:30:28.642437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:44.507 [2024-12-12 20:30:28.642449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:19:44.507 [2024-12-12 20:30:28.642458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.507 [2024-12-12 20:30:28.679129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.507 [2024-12-12 20:30:28.679178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:44.507 [2024-12-12 20:30:28.679196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.507 [2024-12-12 20:30:28.679205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
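The Rollback entries in this stretch of the trace are the shutdown counterparts of the startup steps (reloc, bands metadata, trim map, valid map, NV cache, and so on), emitted while the bdev_ftl_delete RPC issued above unwinds the device. As a rough sketch only, the same graceful teardown can be driven by hand against a running SPDK target; the rpc.py path and the bdev name ftl0 are the ones used in this run, and the command is the same RPC already shown in this log:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Graceful delete: persists the L2P, the NV cache/band/trim metadata and
  # the superblock, then sets the FTL clean state (all visible as trace_step
  # entries above) before the rollback steps release resources.
  $RPC bdev_ftl_delete -b ftl0
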
00:19:44.507 [2024-12-12 20:30:28.679284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.507 [2024-12-12 20:30:28.679293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:44.507 [2024-12-12 20:30:28.679303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.507 [2024-12-12 20:30:28.679311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.507 [2024-12-12 20:30:28.679409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.507 [2024-12-12 20:30:28.679438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:44.507 [2024-12-12 20:30:28.679449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.507 [2024-12-12 20:30:28.679458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.507 [2024-12-12 20:30:28.679477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.507 [2024-12-12 20:30:28.679485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:44.507 [2024-12-12 20:30:28.679495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.507 [2024-12-12 20:30:28.679502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.765 [2024-12-12 20:30:28.759757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.765 [2024-12-12 20:30:28.759830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:44.765 [2024-12-12 20:30:28.759849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.765 [2024-12-12 20:30:28.759858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.765 [2024-12-12 20:30:28.824918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.765 [2024-12-12 20:30:28.824973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:44.765 [2024-12-12 20:30:28.824986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.765 [2024-12-12 20:30:28.824994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.765 [2024-12-12 20:30:28.825128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.765 [2024-12-12 20:30:28.825140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:44.765 [2024-12-12 20:30:28.825150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.765 [2024-12-12 20:30:28.825157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.765 [2024-12-12 20:30:28.825203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.765 [2024-12-12 20:30:28.825213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:44.765 [2024-12-12 20:30:28.825222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.765 [2024-12-12 20:30:28.825230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.765 [2024-12-12 20:30:28.825325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.766 [2024-12-12 20:30:28.825337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:44.766 [2024-12-12 20:30:28.825350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.766 [2024-12-12 
20:30:28.825358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.766 [2024-12-12 20:30:28.825389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.766 [2024-12-12 20:30:28.825399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:44.766 [2024-12-12 20:30:28.825409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.766 [2024-12-12 20:30:28.825440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.766 [2024-12-12 20:30:28.825481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.766 [2024-12-12 20:30:28.825492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:44.766 [2024-12-12 20:30:28.825503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.766 [2024-12-12 20:30:28.825517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.766 [2024-12-12 20:30:28.825579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.766 [2024-12-12 20:30:28.825590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:44.766 [2024-12-12 20:30:28.825599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.766 [2024-12-12 20:30:28.825607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.766 [2024-12-12 20:30:28.825741] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 483.412 ms, result 0 00:19:44.766 true 00:19:44.766 20:30:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77746 00:19:44.766 20:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77746 ']' 00:19:44.766 20:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77746 00:19:44.766 20:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:44.766 20:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.766 20:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77746 00:19:44.766 killing process with pid 77746 00:19:44.766 Received shutdown signal, test time was about 4.000000 seconds 00:19:44.766 00:19:44.766 Latency(us) 00:19:44.766 [2024-12-12T20:30:28.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.766 [2024-12-12T20:30:28.994Z] =================================================================================================================== 00:19:44.766 [2024-12-12T20:30:28.994Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:44.766 20:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:44.766 20:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:44.766 20:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77746' 00:19:44.766 20:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77746 00:19:44.766 20:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77746 00:19:50.030 Remove shared memory files 00:19:50.030 20:30:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:50.030 20:30:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:50.030 20:30:34 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:50.030 20:30:34 ftl.ftl_bdevperf -- ftl/common.sh@205 
-- # rm -f rm -f 00:19:50.030 20:30:34 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:50.030 20:30:34 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:50.030 20:30:34 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:50.030 20:30:34 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:50.030 ************************************ 00:19:50.030 END TEST ftl_bdevperf 00:19:50.030 ************************************ 00:19:50.030 00:19:50.030 real 0m24.876s 00:19:50.030 user 0m27.493s 00:19:50.030 sys 0m0.844s 00:19:50.030 20:30:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.030 20:30:34 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:50.030 20:30:34 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:50.030 20:30:34 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:50.030 20:30:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.030 20:30:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:50.030 ************************************ 00:19:50.030 START TEST ftl_trim 00:19:50.030 ************************************ 00:19:50.030 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:50.291 * Looking for test storage... 00:19:50.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:50.291 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:50.291 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:50.291 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:50.291 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.291 20:30:34 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:50.291 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.291 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:50.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.291 --rc genhtml_branch_coverage=1 00:19:50.291 --rc genhtml_function_coverage=1 00:19:50.291 --rc genhtml_legend=1 00:19:50.291 --rc geninfo_all_blocks=1 00:19:50.291 --rc geninfo_unexecuted_blocks=1 00:19:50.291 00:19:50.291 ' 00:19:50.291 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:50.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.291 --rc genhtml_branch_coverage=1 00:19:50.291 --rc genhtml_function_coverage=1 00:19:50.291 --rc genhtml_legend=1 00:19:50.291 --rc geninfo_all_blocks=1 00:19:50.291 --rc geninfo_unexecuted_blocks=1 00:19:50.291 00:19:50.291 ' 00:19:50.291 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:50.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.291 --rc genhtml_branch_coverage=1 00:19:50.291 --rc genhtml_function_coverage=1 00:19:50.291 --rc genhtml_legend=1 00:19:50.291 --rc geninfo_all_blocks=1 00:19:50.291 --rc geninfo_unexecuted_blocks=1 00:19:50.291 00:19:50.291 ' 00:19:50.291 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:50.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.291 --rc genhtml_branch_coverage=1 00:19:50.291 --rc genhtml_function_coverage=1 00:19:50.291 --rc genhtml_legend=1 00:19:50.291 --rc geninfo_all_blocks=1 00:19:50.291 --rc geninfo_unexecuted_blocks=1 00:19:50.291 00:19:50.291 ' 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
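The ftl_trim run that starts here assembles its FTL device over RPC once the target is up: attach the base NVMe controller, carve a thin-provisioned logical volume for the FTL base, attach the cache controller and split off the write-buffer partition, then create the FTL bdev on top. A condensed sketch of the sequence traced below; every command, flag and size is taken from this run, and the two UUID placeholders stand in for the lvstore and lvol UUIDs the run generates:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base device: 0000:00:11.0 becomes nvme0n1; the 103424 MiB lvol is
  # created thin-provisioned (-t) inside the lvstore.
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore_uuid>
  # NV cache: 0000:00:10.0 becomes nvc0n1; a single 5171 MiB split
  # (nvc0n1p0) backs the write buffer cache.
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1
  # FTL bdev over the lvol and the cache split, with the 60 MiB L2P DRAM
  # limit and overprovisioning used by this run (the 240 s RPC timeout
  # matches the timeout trim.sh sets above).
  $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol_uuid> -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
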
00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.291 20:30:34 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:50.292 20:30:34 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:50.292 20:30:34 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:50.292 20:30:34 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:50.292 20:30:34 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:50.292 20:30:34 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:50.292 20:30:34 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:50.292 20:30:34 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:50.292 20:30:34 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:50.292 20:30:34 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:50.292 20:30:34 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:50.292 20:30:34 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:50.292 20:30:34 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78077 00:19:50.292 20:30:34 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78077 00:19:50.292 20:30:34 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:50.292 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78077 ']' 00:19:50.292 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.292 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.292 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.292 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.292 20:30:34 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:50.292 [2024-12-12 20:30:34.456198] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:19:50.292 [2024-12-12 20:30:34.457027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78077 ] 00:19:50.552 [2024-12-12 20:30:34.616032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:50.552 [2024-12-12 20:30:34.718842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.552 [2024-12-12 20:30:34.719122] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.552 [2024-12-12 20:30:34.719144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.119 20:30:35 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.119 20:30:35 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:51.119 20:30:35 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:51.119 20:30:35 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:51.119 20:30:35 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:51.119 20:30:35 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:51.119 20:30:35 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:51.119 20:30:35 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:51.377 20:30:35 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:51.377 20:30:35 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:51.377 20:30:35 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:51.377 20:30:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:51.377 20:30:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:51.377 20:30:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:51.377 20:30:35 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:51.377 20:30:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:51.635 20:30:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:51.635 { 00:19:51.635 "name": "nvme0n1", 00:19:51.635 "aliases": [ 
00:19:51.635 "05e69156-6166-49f3-9986-904c2977f97b" 00:19:51.635 ], 00:19:51.635 "product_name": "NVMe disk", 00:19:51.635 "block_size": 4096, 00:19:51.635 "num_blocks": 1310720, 00:19:51.635 "uuid": "05e69156-6166-49f3-9986-904c2977f97b", 00:19:51.635 "numa_id": -1, 00:19:51.635 "assigned_rate_limits": { 00:19:51.635 "rw_ios_per_sec": 0, 00:19:51.635 "rw_mbytes_per_sec": 0, 00:19:51.635 "r_mbytes_per_sec": 0, 00:19:51.635 "w_mbytes_per_sec": 0 00:19:51.635 }, 00:19:51.635 "claimed": true, 00:19:51.635 "claim_type": "read_many_write_one", 00:19:51.635 "zoned": false, 00:19:51.635 "supported_io_types": { 00:19:51.635 "read": true, 00:19:51.635 "write": true, 00:19:51.635 "unmap": true, 00:19:51.635 "flush": true, 00:19:51.635 "reset": true, 00:19:51.636 "nvme_admin": true, 00:19:51.636 "nvme_io": true, 00:19:51.636 "nvme_io_md": false, 00:19:51.636 "write_zeroes": true, 00:19:51.636 "zcopy": false, 00:19:51.636 "get_zone_info": false, 00:19:51.636 "zone_management": false, 00:19:51.636 "zone_append": false, 00:19:51.636 "compare": true, 00:19:51.636 "compare_and_write": false, 00:19:51.636 "abort": true, 00:19:51.636 "seek_hole": false, 00:19:51.636 "seek_data": false, 00:19:51.636 "copy": true, 00:19:51.636 "nvme_iov_md": false 00:19:51.636 }, 00:19:51.636 "driver_specific": { 00:19:51.636 "nvme": [ 00:19:51.636 { 00:19:51.636 "pci_address": "0000:00:11.0", 00:19:51.636 "trid": { 00:19:51.636 "trtype": "PCIe", 00:19:51.636 "traddr": "0000:00:11.0" 00:19:51.636 }, 00:19:51.636 "ctrlr_data": { 00:19:51.636 "cntlid": 0, 00:19:51.636 "vendor_id": "0x1b36", 00:19:51.636 "model_number": "QEMU NVMe Ctrl", 00:19:51.636 "serial_number": "12341", 00:19:51.636 "firmware_revision": "8.0.0", 00:19:51.636 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:51.636 "oacs": { 00:19:51.636 "security": 0, 00:19:51.636 "format": 1, 00:19:51.636 "firmware": 0, 00:19:51.636 "ns_manage": 1 00:19:51.636 }, 00:19:51.636 "multi_ctrlr": false, 00:19:51.636 "ana_reporting": false 00:19:51.636 }, 00:19:51.636 "vs": { 00:19:51.636 "nvme_version": "1.4" 00:19:51.636 }, 00:19:51.636 "ns_data": { 00:19:51.636 "id": 1, 00:19:51.636 "can_share": false 00:19:51.636 } 00:19:51.636 } 00:19:51.636 ], 00:19:51.636 "mp_policy": "active_passive" 00:19:51.636 } 00:19:51.636 } 00:19:51.636 ]' 00:19:51.636 20:30:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:51.636 20:30:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:51.636 20:30:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:51.636 20:30:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:51.636 20:30:35 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:51.636 20:30:35 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:51.636 20:30:35 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:51.636 20:30:35 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:51.636 20:30:35 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:51.636 20:30:35 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:51.636 20:30:35 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:51.894 20:30:36 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=559f68f7-9907-4148-a276-545878c8f3cb 00:19:51.894 20:30:36 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:51.894 20:30:36 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 559f68f7-9907-4148-a276-545878c8f3cb 00:19:52.152 20:30:36 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:52.409 20:30:36 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=a3bca995-15b7-4faf-9bf2-5819fa85d706 00:19:52.409 20:30:36 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a3bca995-15b7-4faf-9bf2-5819fa85d706 00:19:52.667 20:30:36 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=d7507384-5b75-42a4-b610-46b54e20fc46 00:19:52.667 20:30:36 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d7507384-5b75-42a4-b610-46b54e20fc46 00:19:52.667 20:30:36 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:52.667 20:30:36 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:52.667 20:30:36 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=d7507384-5b75-42a4-b610-46b54e20fc46 00:19:52.667 20:30:36 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:52.667 20:30:36 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size d7507384-5b75-42a4-b610-46b54e20fc46 00:19:52.667 20:30:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=d7507384-5b75-42a4-b610-46b54e20fc46 00:19:52.667 20:30:36 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:52.667 20:30:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:52.667 20:30:36 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:52.667 20:30:36 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d7507384-5b75-42a4-b610-46b54e20fc46 00:19:52.667 20:30:36 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:52.667 { 00:19:52.667 "name": "d7507384-5b75-42a4-b610-46b54e20fc46", 00:19:52.667 "aliases": [ 00:19:52.667 "lvs/nvme0n1p0" 00:19:52.667 ], 00:19:52.667 "product_name": "Logical Volume", 00:19:52.667 "block_size": 4096, 00:19:52.667 "num_blocks": 26476544, 00:19:52.667 "uuid": "d7507384-5b75-42a4-b610-46b54e20fc46", 00:19:52.667 "assigned_rate_limits": { 00:19:52.667 "rw_ios_per_sec": 0, 00:19:52.667 "rw_mbytes_per_sec": 0, 00:19:52.667 "r_mbytes_per_sec": 0, 00:19:52.667 "w_mbytes_per_sec": 0 00:19:52.667 }, 00:19:52.667 "claimed": false, 00:19:52.667 "zoned": false, 00:19:52.667 "supported_io_types": { 00:19:52.667 "read": true, 00:19:52.667 "write": true, 00:19:52.667 "unmap": true, 00:19:52.667 "flush": false, 00:19:52.667 "reset": true, 00:19:52.667 "nvme_admin": false, 00:19:52.667 "nvme_io": false, 00:19:52.667 "nvme_io_md": false, 00:19:52.667 "write_zeroes": true, 00:19:52.667 "zcopy": false, 00:19:52.667 "get_zone_info": false, 00:19:52.667 "zone_management": false, 00:19:52.667 "zone_append": false, 00:19:52.667 "compare": false, 00:19:52.667 "compare_and_write": false, 00:19:52.667 "abort": false, 00:19:52.667 "seek_hole": true, 00:19:52.667 "seek_data": true, 00:19:52.667 "copy": false, 00:19:52.667 "nvme_iov_md": false 00:19:52.667 }, 00:19:52.667 "driver_specific": { 00:19:52.667 "lvol": { 00:19:52.667 "lvol_store_uuid": "a3bca995-15b7-4faf-9bf2-5819fa85d706", 00:19:52.667 "base_bdev": "nvme0n1", 00:19:52.667 "thin_provision": true, 00:19:52.667 "num_allocated_clusters": 0, 00:19:52.667 "snapshot": false, 00:19:52.667 "clone": false, 00:19:52.667 "esnap_clone": false 00:19:52.667 } 00:19:52.667 } 00:19:52.667 } 00:19:52.667 ]' 00:19:52.667 20:30:36 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:52.925 20:30:36 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:52.925 20:30:36 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:52.925 20:30:36 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:52.925 20:30:36 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:52.925 20:30:36 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:52.925 20:30:36 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:52.925 20:30:36 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:52.925 20:30:36 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:53.183 20:30:37 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:53.183 20:30:37 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:53.183 20:30:37 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size d7507384-5b75-42a4-b610-46b54e20fc46 00:19:53.183 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=d7507384-5b75-42a4-b610-46b54e20fc46 00:19:53.183 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:53.183 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:53.183 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:53.183 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d7507384-5b75-42a4-b610-46b54e20fc46 00:19:53.183 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:53.183 { 00:19:53.183 "name": "d7507384-5b75-42a4-b610-46b54e20fc46", 00:19:53.183 "aliases": [ 00:19:53.183 "lvs/nvme0n1p0" 00:19:53.183 ], 00:19:53.183 "product_name": "Logical Volume", 00:19:53.183 "block_size": 4096, 00:19:53.183 "num_blocks": 26476544, 00:19:53.183 "uuid": "d7507384-5b75-42a4-b610-46b54e20fc46", 00:19:53.183 "assigned_rate_limits": { 00:19:53.183 "rw_ios_per_sec": 0, 00:19:53.183 "rw_mbytes_per_sec": 0, 00:19:53.183 "r_mbytes_per_sec": 0, 00:19:53.183 "w_mbytes_per_sec": 0 00:19:53.183 }, 00:19:53.183 "claimed": false, 00:19:53.183 "zoned": false, 00:19:53.183 "supported_io_types": { 00:19:53.183 "read": true, 00:19:53.183 "write": true, 00:19:53.183 "unmap": true, 00:19:53.183 "flush": false, 00:19:53.183 "reset": true, 00:19:53.183 "nvme_admin": false, 00:19:53.183 "nvme_io": false, 00:19:53.183 "nvme_io_md": false, 00:19:53.183 "write_zeroes": true, 00:19:53.183 "zcopy": false, 00:19:53.183 "get_zone_info": false, 00:19:53.183 "zone_management": false, 00:19:53.183 "zone_append": false, 00:19:53.183 "compare": false, 00:19:53.183 "compare_and_write": false, 00:19:53.183 "abort": false, 00:19:53.183 "seek_hole": true, 00:19:53.183 "seek_data": true, 00:19:53.183 "copy": false, 00:19:53.183 "nvme_iov_md": false 00:19:53.183 }, 00:19:53.183 "driver_specific": { 00:19:53.183 "lvol": { 00:19:53.183 "lvol_store_uuid": "a3bca995-15b7-4faf-9bf2-5819fa85d706", 00:19:53.183 "base_bdev": "nvme0n1", 00:19:53.183 "thin_provision": true, 00:19:53.183 "num_allocated_clusters": 0, 00:19:53.183 "snapshot": false, 00:19:53.183 "clone": false, 00:19:53.183 "esnap_clone": false 00:19:53.183 } 00:19:53.183 } 00:19:53.183 } 00:19:53.183 ]' 00:19:53.183 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:53.441 20:30:37 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:53.441 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:53.441 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:53.441 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:53.441 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:53.441 20:30:37 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:53.441 20:30:37 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:53.700 20:30:37 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:53.700 20:30:37 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:53.700 20:30:37 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size d7507384-5b75-42a4-b610-46b54e20fc46 00:19:53.700 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=d7507384-5b75-42a4-b610-46b54e20fc46 00:19:53.700 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:53.700 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:53.700 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:53.700 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d7507384-5b75-42a4-b610-46b54e20fc46 00:19:53.700 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:53.700 { 00:19:53.700 "name": "d7507384-5b75-42a4-b610-46b54e20fc46", 00:19:53.700 "aliases": [ 00:19:53.700 "lvs/nvme0n1p0" 00:19:53.700 ], 00:19:53.700 "product_name": "Logical Volume", 00:19:53.700 "block_size": 4096, 00:19:53.700 "num_blocks": 26476544, 00:19:53.700 "uuid": "d7507384-5b75-42a4-b610-46b54e20fc46", 00:19:53.700 "assigned_rate_limits": { 00:19:53.700 "rw_ios_per_sec": 0, 00:19:53.700 "rw_mbytes_per_sec": 0, 00:19:53.700 "r_mbytes_per_sec": 0, 00:19:53.700 "w_mbytes_per_sec": 0 00:19:53.700 }, 00:19:53.700 "claimed": false, 00:19:53.700 "zoned": false, 00:19:53.700 "supported_io_types": { 00:19:53.700 "read": true, 00:19:53.700 "write": true, 00:19:53.700 "unmap": true, 00:19:53.700 "flush": false, 00:19:53.700 "reset": true, 00:19:53.700 "nvme_admin": false, 00:19:53.700 "nvme_io": false, 00:19:53.700 "nvme_io_md": false, 00:19:53.700 "write_zeroes": true, 00:19:53.700 "zcopy": false, 00:19:53.700 "get_zone_info": false, 00:19:53.700 "zone_management": false, 00:19:53.700 "zone_append": false, 00:19:53.700 "compare": false, 00:19:53.700 "compare_and_write": false, 00:19:53.700 "abort": false, 00:19:53.700 "seek_hole": true, 00:19:53.700 "seek_data": true, 00:19:53.700 "copy": false, 00:19:53.700 "nvme_iov_md": false 00:19:53.700 }, 00:19:53.700 "driver_specific": { 00:19:53.700 "lvol": { 00:19:53.700 "lvol_store_uuid": "a3bca995-15b7-4faf-9bf2-5819fa85d706", 00:19:53.700 "base_bdev": "nvme0n1", 00:19:53.700 "thin_provision": true, 00:19:53.700 "num_allocated_clusters": 0, 00:19:53.700 "snapshot": false, 00:19:53.700 "clone": false, 00:19:53.700 "esnap_clone": false 00:19:53.700 } 00:19:53.700 } 00:19:53.700 } 00:19:53.700 ]' 00:19:53.700 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:53.959 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:53.959 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:53.959 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:53.959 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:53.959 20:30:37 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:53.959 20:30:37 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:53.959 20:30:37 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d7507384-5b75-42a4-b610-46b54e20fc46 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:53.959 [2024-12-12 20:30:38.142742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.959 [2024-12-12 20:30:38.142782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:53.959 [2024-12-12 20:30:38.142796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:53.959 [2024-12-12 20:30:38.142803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.959 [2024-12-12 20:30:38.145084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.959 [2024-12-12 20:30:38.145113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:53.959 [2024-12-12 20:30:38.145122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms 00:19:53.959 [2024-12-12 20:30:38.145128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.959 [2024-12-12 20:30:38.145203] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:53.959 [2024-12-12 20:30:38.145805] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:53.959 [2024-12-12 20:30:38.145823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.959 [2024-12-12 20:30:38.145830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.959 [2024-12-12 20:30:38.145838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:19:53.959 [2024-12-12 20:30:38.145844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.959 [2024-12-12 20:30:38.145928] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6ce9d3d9-3dbe-48b1-9531-60bd0b669e2c 00:19:53.959 [2024-12-12 20:30:38.146864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.959 [2024-12-12 20:30:38.146889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:53.959 [2024-12-12 20:30:38.146897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:53.959 [2024-12-12 20:30:38.146904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.959 [2024-12-12 20:30:38.151767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.959 [2024-12-12 20:30:38.151794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.959 [2024-12-12 20:30:38.151802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.803 ms 00:19:53.959 [2024-12-12 20:30:38.151809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.959 [2024-12-12 20:30:38.151920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.959 [2024-12-12 20:30:38.151930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.959 [2024-12-12 20:30:38.151936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.057 ms 00:19:53.959 [2024-12-12 20:30:38.151945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.959 [2024-12-12 20:30:38.151971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.959 [2024-12-12 20:30:38.151979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:53.959 [2024-12-12 20:30:38.151986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:53.959 [2024-12-12 20:30:38.151994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.959 [2024-12-12 20:30:38.152019] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:53.959 [2024-12-12 20:30:38.154901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.959 [2024-12-12 20:30:38.154927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.959 [2024-12-12 20:30:38.154936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.884 ms 00:19:53.959 [2024-12-12 20:30:38.154943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.959 [2024-12-12 20:30:38.154976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.959 [2024-12-12 20:30:38.154992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:53.959 [2024-12-12 20:30:38.155000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:53.959 [2024-12-12 20:30:38.155005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.959 [2024-12-12 20:30:38.155028] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:53.959 [2024-12-12 20:30:38.155136] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:53.959 [2024-12-12 20:30:38.155147] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:53.959 [2024-12-12 20:30:38.155156] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:53.959 [2024-12-12 20:30:38.155165] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:53.959 [2024-12-12 20:30:38.155171] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:53.959 [2024-12-12 20:30:38.155179] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:53.959 [2024-12-12 20:30:38.155184] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:53.959 [2024-12-12 20:30:38.155192] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:53.960 [2024-12-12 20:30:38.155198] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:53.960 [2024-12-12 20:30:38.155205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.960 [2024-12-12 20:30:38.155211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:53.960 [2024-12-12 20:30:38.155218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:19:53.960 [2024-12-12 20:30:38.155224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.960 [2024-12-12 20:30:38.155298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.960 
[2024-12-12 20:30:38.155304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:53.960 [2024-12-12 20:30:38.155311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:53.960 [2024-12-12 20:30:38.155316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.960 [2024-12-12 20:30:38.155427] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:53.960 [2024-12-12 20:30:38.155435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:53.960 [2024-12-12 20:30:38.155443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:53.960 [2024-12-12 20:30:38.155449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.960 [2024-12-12 20:30:38.155456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:53.960 [2024-12-12 20:30:38.155461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:53.960 [2024-12-12 20:30:38.155468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:53.960 [2024-12-12 20:30:38.155473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:53.960 [2024-12-12 20:30:38.155480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:53.960 [2024-12-12 20:30:38.155484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:53.960 [2024-12-12 20:30:38.155492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:53.960 [2024-12-12 20:30:38.155498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:53.960 [2024-12-12 20:30:38.155504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:53.960 [2024-12-12 20:30:38.155509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:53.960 [2024-12-12 20:30:38.155516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:53.960 [2024-12-12 20:30:38.155523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.960 [2024-12-12 20:30:38.155530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:53.960 [2024-12-12 20:30:38.155535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:53.960 [2024-12-12 20:30:38.155541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.960 [2024-12-12 20:30:38.155547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:53.960 [2024-12-12 20:30:38.155553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:53.960 [2024-12-12 20:30:38.155558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.960 [2024-12-12 20:30:38.155564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:53.960 [2024-12-12 20:30:38.155569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:53.960 [2024-12-12 20:30:38.155576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.960 [2024-12-12 20:30:38.155581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:53.960 [2024-12-12 20:30:38.155587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:53.960 [2024-12-12 20:30:38.155592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.960 [2024-12-12 20:30:38.155598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:53.960 [2024-12-12 20:30:38.155604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:53.960 [2024-12-12 20:30:38.155610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.960 [2024-12-12 20:30:38.155616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:53.960 [2024-12-12 20:30:38.155623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:53.960 [2024-12-12 20:30:38.155628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:53.960 [2024-12-12 20:30:38.155634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:53.960 [2024-12-12 20:30:38.155639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:53.960 [2024-12-12 20:30:38.155646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:53.960 [2024-12-12 20:30:38.155651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:53.960 [2024-12-12 20:30:38.155657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:53.960 [2024-12-12 20:30:38.155662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.960 [2024-12-12 20:30:38.155668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:53.960 [2024-12-12 20:30:38.155673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:53.960 [2024-12-12 20:30:38.155679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.960 [2024-12-12 20:30:38.155684] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:53.960 [2024-12-12 20:30:38.155691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:53.960 [2024-12-12 20:30:38.155696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:53.960 [2024-12-12 20:30:38.155703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.960 [2024-12-12 20:30:38.155710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:53.960 [2024-12-12 20:30:38.155717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:53.960 [2024-12-12 20:30:38.155722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:53.960 [2024-12-12 20:30:38.155729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:53.960 [2024-12-12 20:30:38.155734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:53.960 [2024-12-12 20:30:38.155740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:53.960 [2024-12-12 20:30:38.155746] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:53.960 [2024-12-12 20:30:38.155755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:53.960 [2024-12-12 20:30:38.155766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:53.960 [2024-12-12 20:30:38.155773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:53.960 [2024-12-12 20:30:38.155779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:53.960 [2024-12-12 20:30:38.155785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:53.960 [2024-12-12 20:30:38.155791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:53.960 [2024-12-12 20:30:38.155797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:53.960 [2024-12-12 20:30:38.155803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:53.960 [2024-12-12 20:30:38.155811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:53.960 [2024-12-12 20:30:38.155819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:53.960 [2024-12-12 20:30:38.155827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:53.960 [2024-12-12 20:30:38.155832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:53.960 [2024-12-12 20:30:38.155839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:53.960 [2024-12-12 20:30:38.155845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:53.960 [2024-12-12 20:30:38.155851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:53.960 [2024-12-12 20:30:38.155857] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:53.960 [2024-12-12 20:30:38.155866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:53.960 [2024-12-12 20:30:38.155872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:53.960 [2024-12-12 20:30:38.155878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:53.960 [2024-12-12 20:30:38.155884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:53.960 [2024-12-12 20:30:38.155890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:53.960 [2024-12-12 20:30:38.155896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.960 [2024-12-12 20:30:38.155903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:53.960 [2024-12-12 20:30:38.155909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:19:53.960 [2024-12-12 20:30:38.155916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.960 [2024-12-12 20:30:38.155992] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:53.960 [2024-12-12 20:30:38.156034] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:56.490 [2024-12-12 20:30:40.179957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.490 [2024-12-12 20:30:40.180161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:56.490 [2024-12-12 20:30:40.180235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2023.822 ms 00:19:56.490 [2024-12-12 20:30:40.180263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.490 [2024-12-12 20:30:40.205722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.490 [2024-12-12 20:30:40.205910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:56.490 [2024-12-12 20:30:40.205979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.950 ms 00:19:56.490 [2024-12-12 20:30:40.206005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.490 [2024-12-12 20:30:40.206149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.490 [2024-12-12 20:30:40.206250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:56.490 [2024-12-12 20:30:40.206301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:56.490 [2024-12-12 20:30:40.206326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.490 [2024-12-12 20:30:40.247559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.490 [2024-12-12 20:30:40.247722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.490 [2024-12-12 20:30:40.247789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.188 ms 00:19:56.491 [2024-12-12 20:30:40.247817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.248214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.248314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.491 [2024-12-12 20:30:40.248377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:56.491 [2024-12-12 20:30:40.248402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.248900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.248996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.491 [2024-12-12 20:30:40.249047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:19:56.491 [2024-12-12 20:30:40.249071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.249246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.249300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.491 [2024-12-12 20:30:40.249358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:56.491 [2024-12-12 20:30:40.249383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.263534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.263637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:56.491 [2024-12-12 20:30:40.263689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.104 ms 00:19:56.491 [2024-12-12 20:30:40.263713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.276108] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:56.491 [2024-12-12 20:30:40.290073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.290176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:56.491 [2024-12-12 20:30:40.290226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.235 ms 00:19:56.491 [2024-12-12 20:30:40.290247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.355243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.355389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:56.491 [2024-12-12 20:30:40.355485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.911 ms 00:19:56.491 [2024-12-12 20:30:40.355956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.356240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.356322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:56.491 [2024-12-12 20:30:40.356375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:19:56.491 [2024-12-12 20:30:40.356398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.379733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.379830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:56.491 [2024-12-12 20:30:40.379934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.275 ms 00:19:56.491 [2024-12-12 20:30:40.379965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.402690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.402791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:56.491 [2024-12-12 20:30:40.402853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.652 ms 00:19:56.491 [2024-12-12 20:30:40.402873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.403502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.403586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:56.491 [2024-12-12 20:30:40.403636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:19:56.491 [2024-12-12 20:30:40.403657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.472941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.473078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:56.491 [2024-12-12 20:30:40.473133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.232 ms 00:19:56.491 [2024-12-12 20:30:40.473156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:56.491 [2024-12-12 20:30:40.496896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.496999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:56.491 [2024-12-12 20:30:40.497049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.596 ms 00:19:56.491 [2024-12-12 20:30:40.497070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.519847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.519945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:56.491 [2024-12-12 20:30:40.519962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.693 ms 00:19:56.491 [2024-12-12 20:30:40.519969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.542773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.542819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:56.491 [2024-12-12 20:30:40.542832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.737 ms 00:19:56.491 [2024-12-12 20:30:40.542839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.542898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.542908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:56.491 [2024-12-12 20:30:40.542921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:56.491 [2024-12-12 20:30:40.542928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.543001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.491 [2024-12-12 20:30:40.543010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:56.491 [2024-12-12 20:30:40.543019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:56.491 [2024-12-12 20:30:40.543027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.491 [2024-12-12 20:30:40.543785] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:56.491 [2024-12-12 20:30:40.546621] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2400.613 ms, result 0 00:19:56.491 [2024-12-12 20:30:40.547243] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:56.491 { 00:19:56.491 "name": "ftl0", 00:19:56.491 "uuid": "6ce9d3d9-3dbe-48b1-9531-60bd0b669e2c" 00:19:56.491 } 00:19:56.491 20:30:40 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:56.491 20:30:40 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:56.491 20:30:40 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:56.491 20:30:40 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:56.491 20:30:40 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:56.491 20:30:40 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:56.491 20:30:40 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:56.749 20:30:40 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:56.749 [ 00:19:56.749 { 00:19:56.749 "name": "ftl0", 00:19:56.749 "aliases": [ 00:19:56.749 "6ce9d3d9-3dbe-48b1-9531-60bd0b669e2c" 00:19:56.749 ], 00:19:56.749 "product_name": "FTL disk", 00:19:56.749 "block_size": 4096, 00:19:56.749 "num_blocks": 23592960, 00:19:56.749 "uuid": "6ce9d3d9-3dbe-48b1-9531-60bd0b669e2c", 00:19:56.749 "assigned_rate_limits": { 00:19:56.749 "rw_ios_per_sec": 0, 00:19:56.749 "rw_mbytes_per_sec": 0, 00:19:56.749 "r_mbytes_per_sec": 0, 00:19:56.749 "w_mbytes_per_sec": 0 00:19:56.749 }, 00:19:56.749 "claimed": false, 00:19:56.749 "zoned": false, 00:19:56.749 "supported_io_types": { 00:19:56.749 "read": true, 00:19:56.749 "write": true, 00:19:56.749 "unmap": true, 00:19:56.749 "flush": true, 00:19:56.749 "reset": false, 00:19:56.749 "nvme_admin": false, 00:19:56.749 "nvme_io": false, 00:19:56.749 "nvme_io_md": false, 00:19:56.749 "write_zeroes": true, 00:19:56.749 "zcopy": false, 00:19:56.749 "get_zone_info": false, 00:19:56.749 "zone_management": false, 00:19:56.749 "zone_append": false, 00:19:56.749 "compare": false, 00:19:56.749 "compare_and_write": false, 00:19:56.749 "abort": false, 00:19:56.749 "seek_hole": false, 00:19:56.750 "seek_data": false, 00:19:56.750 "copy": false, 00:19:56.750 "nvme_iov_md": false 00:19:56.750 }, 00:19:56.750 "driver_specific": { 00:19:56.750 "ftl": { 00:19:56.750 "base_bdev": "d7507384-5b75-42a4-b610-46b54e20fc46", 00:19:56.750 "cache": "nvc0n1p0" 00:19:56.750 } 00:19:56.750 } 00:19:56.750 } 00:19:56.750 ] 00:19:56.750 20:30:40 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:56.750 20:30:40 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:56.750 20:30:40 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:57.007 20:30:41 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:57.007 20:30:41 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:57.265 20:30:41 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:57.265 { 00:19:57.265 "name": "ftl0", 00:19:57.265 "aliases": [ 00:19:57.265 "6ce9d3d9-3dbe-48b1-9531-60bd0b669e2c" 00:19:57.265 ], 00:19:57.265 "product_name": "FTL disk", 00:19:57.265 "block_size": 4096, 00:19:57.265 "num_blocks": 23592960, 00:19:57.265 "uuid": "6ce9d3d9-3dbe-48b1-9531-60bd0b669e2c", 00:19:57.265 "assigned_rate_limits": { 00:19:57.265 "rw_ios_per_sec": 0, 00:19:57.265 "rw_mbytes_per_sec": 0, 00:19:57.265 "r_mbytes_per_sec": 0, 00:19:57.265 "w_mbytes_per_sec": 0 00:19:57.265 }, 00:19:57.265 "claimed": false, 00:19:57.265 "zoned": false, 00:19:57.265 "supported_io_types": { 00:19:57.265 "read": true, 00:19:57.265 "write": true, 00:19:57.265 "unmap": true, 00:19:57.265 "flush": true, 00:19:57.265 "reset": false, 00:19:57.265 "nvme_admin": false, 00:19:57.265 "nvme_io": false, 00:19:57.265 "nvme_io_md": false, 00:19:57.265 "write_zeroes": true, 00:19:57.265 "zcopy": false, 00:19:57.265 "get_zone_info": false, 00:19:57.265 "zone_management": false, 00:19:57.265 "zone_append": false, 00:19:57.265 "compare": false, 00:19:57.265 "compare_and_write": false, 00:19:57.265 "abort": false, 00:19:57.265 "seek_hole": false, 00:19:57.265 "seek_data": false, 00:19:57.265 "copy": false, 00:19:57.265 "nvme_iov_md": false 00:19:57.265 }, 00:19:57.265 "driver_specific": { 00:19:57.265 "ftl": { 00:19:57.265 "base_bdev": "d7507384-5b75-42a4-b610-46b54e20fc46", 
00:19:57.265 "cache": "nvc0n1p0" 00:19:57.265 } 00:19:57.265 } 00:19:57.265 } 00:19:57.265 ]' 00:19:57.265 20:30:41 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:57.265 20:30:41 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:57.265 20:30:41 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:57.524 [2024-12-12 20:30:41.582358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.524 [2024-12-12 20:30:41.582530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:57.524 [2024-12-12 20:30:41.582553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:57.524 [2024-12-12 20:30:41.582563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.524 [2024-12-12 20:30:41.582598] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:57.524 [2024-12-12 20:30:41.585188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.524 [2024-12-12 20:30:41.585216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:57.524 [2024-12-12 20:30:41.585233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.573 ms 00:19:57.524 [2024-12-12 20:30:41.585242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.524 [2024-12-12 20:30:41.585695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.524 [2024-12-12 20:30:41.585787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:57.524 [2024-12-12 20:30:41.585803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:19:57.524 [2024-12-12 20:30:41.585811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.524 [2024-12-12 20:30:41.589458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.524 [2024-12-12 20:30:41.589477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:57.524 [2024-12-12 20:30:41.589489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.615 ms 00:19:57.524 [2024-12-12 20:30:41.589498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.524 [2024-12-12 20:30:41.596506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.524 [2024-12-12 20:30:41.596608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:57.524 [2024-12-12 20:30:41.596625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.952 ms 00:19:57.524 [2024-12-12 20:30:41.596633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.524 [2024-12-12 20:30:41.619687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.524 [2024-12-12 20:30:41.619803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:57.525 [2024-12-12 20:30:41.619824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.983 ms 00:19:57.525 [2024-12-12 20:30:41.619832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.525 [2024-12-12 20:30:41.634754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.525 [2024-12-12 20:30:41.634869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:57.525 [2024-12-12 20:30:41.634889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.865 ms 00:19:57.525 [2024-12-12 20:30:41.634897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.525 [2024-12-12 20:30:41.635092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.525 [2024-12-12 20:30:41.635102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:57.525 [2024-12-12 20:30:41.635112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:19:57.525 [2024-12-12 20:30:41.635119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.525 [2024-12-12 20:30:41.657469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.525 [2024-12-12 20:30:41.657574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:57.525 [2024-12-12 20:30:41.657591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.321 ms 00:19:57.525 [2024-12-12 20:30:41.657599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.525 [2024-12-12 20:30:41.679940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.525 [2024-12-12 20:30:41.679969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:57.525 [2024-12-12 20:30:41.679983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.276 ms 00:19:57.525 [2024-12-12 20:30:41.679991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.525 [2024-12-12 20:30:41.701734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.525 [2024-12-12 20:30:41.701762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:57.525 [2024-12-12 20:30:41.701773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.673 ms 00:19:57.525 [2024-12-12 20:30:41.701780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.525 [2024-12-12 20:30:41.724579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.525 [2024-12-12 20:30:41.724698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:57.525 [2024-12-12 20:30:41.724717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.694 ms 00:19:57.525 [2024-12-12 20:30:41.724725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.525 [2024-12-12 20:30:41.724791] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:57.525 [2024-12-12 20:30:41.724806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724875] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.724996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 
[2024-12-12 20:30:41.725106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:57.525 [2024-12-12 20:30:41.725312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:57.525 [2024-12-12 20:30:41.725428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:57.526 [2024-12-12 20:30:41.725706] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:57.526 [2024-12-12 20:30:41.725717] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ce9d3d9-3dbe-48b1-9531-60bd0b669e2c 00:19:57.526 [2024-12-12 20:30:41.725725] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:57.526 [2024-12-12 20:30:41.725734] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:57.526 [2024-12-12 20:30:41.725742] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:57.526 [2024-12-12 20:30:41.725751] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:57.526 [2024-12-12 20:30:41.725758] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:57.526 [2024-12-12 20:30:41.725767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:57.526 [2024-12-12 20:30:41.725774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:57.526 [2024-12-12 20:30:41.725782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:57.526 [2024-12-12 20:30:41.725788] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:57.526 [2024-12-12 20:30:41.725797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.526 [2024-12-12 20:30:41.725804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:57.526 [2024-12-12 20:30:41.725814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:19:57.526 [2024-12-12 20:30:41.725820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.526 [2024-12-12 20:30:41.738322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.526 [2024-12-12 20:30:41.738352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:57.526 [2024-12-12 20:30:41.738366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.459 ms 00:19:57.526 [2024-12-12 20:30:41.738374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.526 [2024-12-12 20:30:41.738758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.526 [2024-12-12 20:30:41.738773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:57.526 [2024-12-12 20:30:41.738783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:19:57.526 [2024-12-12 20:30:41.738790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.785 [2024-12-12 20:30:41.781894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.785 [2024-12-12 20:30:41.781936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.785 [2024-12-12 20:30:41.781949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.785 [2024-12-12 20:30:41.781957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.785 [2024-12-12 20:30:41.782071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.785 [2024-12-12 20:30:41.782080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.785 [2024-12-12 20:30:41.782090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.785 [2024-12-12 20:30:41.782097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.785 [2024-12-12 20:30:41.782157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.785 [2024-12-12 20:30:41.782168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.785 [2024-12-12 20:30:41.782180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.785 [2024-12-12 20:30:41.782187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.785 [2024-12-12 20:30:41.782214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.785 [2024-12-12 20:30:41.782222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.785 [2024-12-12 20:30:41.782231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.785 [2024-12-12 20:30:41.782238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.785 [2024-12-12 20:30:41.862132] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.785 [2024-12-12 20:30:41.862179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.785 [2024-12-12 20:30:41.862192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.785 [2024-12-12 20:30:41.862200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.785 [2024-12-12 20:30:41.924365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.785 [2024-12-12 20:30:41.924539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.785 [2024-12-12 20:30:41.924558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.785 [2024-12-12 20:30:41.924567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.785 [2024-12-12 20:30:41.924671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.785 [2024-12-12 20:30:41.924681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.785 [2024-12-12 20:30:41.924696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.785 [2024-12-12 20:30:41.924704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.785 [2024-12-12 20:30:41.924759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.785 [2024-12-12 20:30:41.924768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.785 [2024-12-12 20:30:41.924777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.785 [2024-12-12 20:30:41.924784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.785 [2024-12-12 20:30:41.924903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.785 [2024-12-12 20:30:41.924917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.785 [2024-12-12 20:30:41.924927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.785 [2024-12-12 20:30:41.924936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.785 [2024-12-12 20:30:41.924988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.785 [2024-12-12 20:30:41.924997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:57.785 [2024-12-12 20:30:41.925006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.785 [2024-12-12 20:30:41.925014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.785 [2024-12-12 20:30:41.925063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.785 [2024-12-12 20:30:41.925071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.785 [2024-12-12 20:30:41.925083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.785 [2024-12-12 20:30:41.925091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.785 [2024-12-12 20:30:41.925138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.785 [2024-12-12 20:30:41.925147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.785 [2024-12-12 20:30:41.925156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.785 [2024-12-12 20:30:41.925163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:57.785 [2024-12-12 20:30:41.925331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.926 ms, result 0 00:19:57.785 true 00:19:57.785 20:30:41 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78077 00:19:57.785 20:30:41 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78077 ']' 00:19:57.785 20:30:41 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78077 00:19:57.785 20:30:41 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:57.785 20:30:41 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.785 20:30:41 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78077 00:19:57.785 killing process with pid 78077 00:19:57.785 20:30:41 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:57.785 20:30:41 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:57.785 20:30:41 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78077' 00:19:57.785 20:30:41 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78077 00:19:57.785 20:30:41 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78077 00:20:05.889 20:30:49 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:06.824 65536+0 records in 00:20:06.824 65536+0 records out 00:20:06.824 268435456 bytes (268 MB, 256 MiB) copied, 1.07051 s, 251 MB/s 00:20:06.824 20:30:50 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:06.824 [2024-12-12 20:30:50.811055] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:20:06.824 [2024-12-12 20:30:50.811164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78255 ] 00:20:06.824 [2024-12-12 20:30:50.974862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.083 [2024-12-12 20:30:51.092345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.342 [2024-12-12 20:30:51.348912] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:07.342 [2024-12-12 20:30:51.348975] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:07.342 [2024-12-12 20:30:51.504512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.342 [2024-12-12 20:30:51.504561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:07.342 [2024-12-12 20:30:51.504574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:07.342 [2024-12-12 20:30:51.504582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.342 [2024-12-12 20:30:51.507209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.342 [2024-12-12 20:30:51.507243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:07.342 [2024-12-12 20:30:51.507252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.608 ms 00:20:07.342 [2024-12-12 20:30:51.507259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.342 [2024-12-12 20:30:51.507327] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:07.342 [2024-12-12 20:30:51.508005] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:07.342 [2024-12-12 20:30:51.508030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.342 [2024-12-12 20:30:51.508038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:07.342 [2024-12-12 20:30:51.508047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:20:07.342 [2024-12-12 20:30:51.508054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.342 [2024-12-12 20:30:51.509123] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:07.342 [2024-12-12 20:30:51.521253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.342 [2024-12-12 20:30:51.521283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:07.342 [2024-12-12 20:30:51.521294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.130 ms 00:20:07.342 [2024-12-12 20:30:51.521302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.342 [2024-12-12 20:30:51.521390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.342 [2024-12-12 20:30:51.521401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:07.342 [2024-12-12 20:30:51.521410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:07.342 [2024-12-12 20:30:51.521429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.342 [2024-12-12 20:30:51.526041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:07.342 [2024-12-12 20:30:51.526069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:07.342 [2024-12-12 20:30:51.526078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.570 ms 00:20:07.342 [2024-12-12 20:30:51.526085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.342 [2024-12-12 20:30:51.526168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.342 [2024-12-12 20:30:51.526178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:07.342 [2024-12-12 20:30:51.526187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:07.342 [2024-12-12 20:30:51.526194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.342 [2024-12-12 20:30:51.526221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.342 [2024-12-12 20:30:51.526229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:07.342 [2024-12-12 20:30:51.526236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:07.342 [2024-12-12 20:30:51.526244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.342 [2024-12-12 20:30:51.526262] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:07.342 [2024-12-12 20:30:51.529518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.342 [2024-12-12 20:30:51.529544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:07.342 [2024-12-12 20:30:51.529552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.260 ms 00:20:07.342 [2024-12-12 20:30:51.529560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.342 [2024-12-12 20:30:51.529595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.342 [2024-12-12 20:30:51.529603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:07.342 [2024-12-12 20:30:51.529611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:07.342 [2024-12-12 20:30:51.529618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.342 [2024-12-12 20:30:51.529637] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:07.342 [2024-12-12 20:30:51.529655] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:07.342 [2024-12-12 20:30:51.529688] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:07.342 [2024-12-12 20:30:51.529702] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:07.342 [2024-12-12 20:30:51.529803] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:07.342 [2024-12-12 20:30:51.529813] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:07.342 [2024-12-12 20:30:51.529823] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:07.342 [2024-12-12 20:30:51.529835] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:07.342 [2024-12-12 20:30:51.529844] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:07.342 [2024-12-12 20:30:51.529851] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:07.342 [2024-12-12 20:30:51.529858] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:07.342 [2024-12-12 20:30:51.529865] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:07.342 [2024-12-12 20:30:51.529872] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:07.342 [2024-12-12 20:30:51.529879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.342 [2024-12-12 20:30:51.529886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:07.342 [2024-12-12 20:30:51.529894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:20:07.342 [2024-12-12 20:30:51.529901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.342 [2024-12-12 20:30:51.529988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.342 [2024-12-12 20:30:51.529998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:07.343 [2024-12-12 20:30:51.530006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:07.343 [2024-12-12 20:30:51.530012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.343 [2024-12-12 20:30:51.530109] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:07.343 [2024-12-12 20:30:51.530117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:07.343 [2024-12-12 20:30:51.530125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:07.343 [2024-12-12 20:30:51.530132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.343 [2024-12-12 20:30:51.530140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:07.343 [2024-12-12 20:30:51.530146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:07.343 [2024-12-12 20:30:51.530153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:07.343 [2024-12-12 20:30:51.530160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:07.343 [2024-12-12 20:30:51.530167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:07.343 [2024-12-12 20:30:51.530174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:07.343 [2024-12-12 20:30:51.530181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:07.343 [2024-12-12 20:30:51.530192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:07.343 [2024-12-12 20:30:51.530199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:07.343 [2024-12-12 20:30:51.530205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:07.343 [2024-12-12 20:30:51.530212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:07.343 [2024-12-12 20:30:51.530218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.343 [2024-12-12 20:30:51.530225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:07.343 [2024-12-12 20:30:51.530236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:07.343 [2024-12-12 20:30:51.530243] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.343 [2024-12-12 20:30:51.530250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:07.343 [2024-12-12 20:30:51.530256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:07.343 [2024-12-12 20:30:51.530263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:07.343 [2024-12-12 20:30:51.530269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:07.343 [2024-12-12 20:30:51.530275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:07.343 [2024-12-12 20:30:51.530282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:07.343 [2024-12-12 20:30:51.530288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:07.343 [2024-12-12 20:30:51.530294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:07.343 [2024-12-12 20:30:51.530300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:07.343 [2024-12-12 20:30:51.530307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:07.343 [2024-12-12 20:30:51.530313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:07.343 [2024-12-12 20:30:51.530319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:07.343 [2024-12-12 20:30:51.530326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:07.343 [2024-12-12 20:30:51.530332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:07.343 [2024-12-12 20:30:51.530339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:07.343 [2024-12-12 20:30:51.530345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:07.343 [2024-12-12 20:30:51.530351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:07.343 [2024-12-12 20:30:51.530357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:07.343 [2024-12-12 20:30:51.530364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:07.343 [2024-12-12 20:30:51.530370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:07.343 [2024-12-12 20:30:51.530377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.343 [2024-12-12 20:30:51.530383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:07.343 [2024-12-12 20:30:51.530390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:07.343 [2024-12-12 20:30:51.530396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.343 [2024-12-12 20:30:51.530403] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:07.343 [2024-12-12 20:30:51.530410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:07.343 [2024-12-12 20:30:51.530703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:07.343 [2024-12-12 20:30:51.530724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.343 [2024-12-12 20:30:51.530743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:07.343 [2024-12-12 20:30:51.530804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:07.343 [2024-12-12 20:30:51.530831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:07.343 
[2024-12-12 20:30:51.530849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:07.343 [2024-12-12 20:30:51.530867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:07.343 [2024-12-12 20:30:51.530914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:07.343 [2024-12-12 20:30:51.530936] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:07.343 [2024-12-12 20:30:51.530968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:07.343 [2024-12-12 20:30:51.531149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:07.343 [2024-12-12 20:30:51.531177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:07.343 [2024-12-12 20:30:51.531203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:07.343 [2024-12-12 20:30:51.531231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:07.343 [2024-12-12 20:30:51.531259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:07.343 [2024-12-12 20:30:51.531317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:07.343 [2024-12-12 20:30:51.531349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:07.343 [2024-12-12 20:30:51.531375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:07.343 [2024-12-12 20:30:51.531403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:07.343 [2024-12-12 20:30:51.531442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:07.343 [2024-12-12 20:30:51.531543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:07.343 [2024-12-12 20:30:51.531572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:07.343 [2024-12-12 20:30:51.531599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:07.343 [2024-12-12 20:30:51.531626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:07.343 [2024-12-12 20:30:51.531634] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:07.343 [2024-12-12 20:30:51.531642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:07.343 [2024-12-12 20:30:51.531651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:07.343 [2024-12-12 20:30:51.531658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:07.343 [2024-12-12 20:30:51.531665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:07.343 [2024-12-12 20:30:51.531673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:07.343 [2024-12-12 20:30:51.531681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.343 [2024-12-12 20:30:51.531692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:07.343 [2024-12-12 20:30:51.531700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.640 ms 00:20:07.343 [2024-12-12 20:30:51.531707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.343 [2024-12-12 20:30:51.557129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.343 [2024-12-12 20:30:51.557247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:07.343 [2024-12-12 20:30:51.557299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.331 ms 00:20:07.343 [2024-12-12 20:30:51.557321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.343 [2024-12-12 20:30:51.557471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.343 [2024-12-12 20:30:51.557498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:07.343 [2024-12-12 20:30:51.557518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:07.343 [2024-12-12 20:30:51.557588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.602 [2024-12-12 20:30:51.597200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.602 [2024-12-12 20:30:51.597339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:07.602 [2024-12-12 20:30:51.597427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.571 ms 00:20:07.602 [2024-12-12 20:30:51.597455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.602 [2024-12-12 20:30:51.597637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.602 [2024-12-12 20:30:51.597667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:07.602 [2024-12-12 20:30:51.597731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:07.602 [2024-12-12 20:30:51.597754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.602 [2024-12-12 20:30:51.598075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.602 [2024-12-12 20:30:51.598164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:07.602 [2024-12-12 20:30:51.598213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:20:07.602 [2024-12-12 20:30:51.598239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.602 [2024-12-12 20:30:51.598372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.602 [2024-12-12 20:30:51.598466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:07.602 [2024-12-12 20:30:51.598509] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:07.602 [2024-12-12 20:30:51.598528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.602 [2024-12-12 20:30:51.611755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.602 [2024-12-12 20:30:51.611865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:07.602 [2024-12-12 20:30:51.612003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.180 ms 00:20:07.602 [2024-12-12 20:30:51.612029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.602 [2024-12-12 20:30:51.625308] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:07.602 [2024-12-12 20:30:51.625479] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:07.602 [2024-12-12 20:30:51.625546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.602 [2024-12-12 20:30:51.625567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:07.602 [2024-12-12 20:30:51.625588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.374 ms 00:20:07.602 [2024-12-12 20:30:51.625607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.602 [2024-12-12 20:30:51.650507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.602 [2024-12-12 20:30:51.650632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:07.602 [2024-12-12 20:30:51.650692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.813 ms 00:20:07.602 [2024-12-12 20:30:51.650715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.602 [2024-12-12 20:30:51.662321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.602 [2024-12-12 20:30:51.662443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:07.602 [2024-12-12 20:30:51.662495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.525 ms 00:20:07.602 [2024-12-12 20:30:51.662516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.602 [2024-12-12 20:30:51.673730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.602 [2024-12-12 20:30:51.673843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:07.602 [2024-12-12 20:30:51.673892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.108 ms 00:20:07.602 [2024-12-12 20:30:51.673913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.602 [2024-12-12 20:30:51.674559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.602 [2024-12-12 20:30:51.674643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:07.602 [2024-12-12 20:30:51.674692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:20:07.602 [2024-12-12 20:30:51.674713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.602 [2024-12-12 20:30:51.729404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.602 [2024-12-12 20:30:51.729575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:07.602 [2024-12-12 20:30:51.729634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.651 ms 00:20:07.602 [2024-12-12 20:30:51.729657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.602 [2024-12-12 20:30:51.740074] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:07.602 [2024-12-12 20:30:51.753361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.603 [2024-12-12 20:30:51.753507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:07.603 [2024-12-12 20:30:51.753558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.344 ms 00:20:07.603 [2024-12-12 20:30:51.753580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.603 [2024-12-12 20:30:51.753673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.603 [2024-12-12 20:30:51.753699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:07.603 [2024-12-12 20:30:51.753719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:07.603 [2024-12-12 20:30:51.753737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.603 [2024-12-12 20:30:51.753797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.603 [2024-12-12 20:30:51.753819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:07.603 [2024-12-12 20:30:51.753840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:07.603 [2024-12-12 20:30:51.753913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.603 [2024-12-12 20:30:51.753962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.603 [2024-12-12 20:30:51.753986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:07.603 [2024-12-12 20:30:51.754005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:07.603 [2024-12-12 20:30:51.754023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.603 [2024-12-12 20:30:51.754065] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:07.603 [2024-12-12 20:30:51.754130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.603 [2024-12-12 20:30:51.754152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:07.603 [2024-12-12 20:30:51.754171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:07.603 [2024-12-12 20:30:51.754189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.603 [2024-12-12 20:30:51.776734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.603 [2024-12-12 20:30:51.776851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:07.603 [2024-12-12 20:30:51.776900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.509 ms 00:20:07.603 [2024-12-12 20:30:51.776922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.603 [2024-12-12 20:30:51.777012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.603 [2024-12-12 20:30:51.777038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:07.603 [2024-12-12 20:30:51.777058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:07.603 [2024-12-12 20:30:51.777076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
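Every management step in the startup trace follows the same four-entry pattern from mngt/ftl_mngt.c (427: Action, 428: name, 430: duration, 431: status), which makes logs like this easy to profile. A small sketch that extracts per-step timings, assuming a capture with one trace entry per line (ftl_startup.log is a hypothetical file name):

    # Print "<duration>  <step name>" for every FTL management step
    awk -F'name: ' '/428:trace_step/ { name = $2 }
                    /430:trace_step/ { split($0, d, "duration: "); printf "%-12s %s\n", d[2], name }' ftl_startup.log

The individual durations roll up into the 'FTL startup' total of 273.084 ms reported just below; the slowest steps here include Restore P2L checkpoints (54.651 ms), Initialize NV cache (39.571 ms), Initialize metadata (25.331 ms), and Initialize L2P (23.344 ms).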
00:20:07.603 [2024-12-12 20:30:51.777880] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:07.603 [2024-12-12 20:30:51.780827] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 273.084 ms, result 0 00:20:07.603 [2024-12-12 20:30:51.781452] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:07.603 [2024-12-12 20:30:51.794243] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:08.989  [2024-12-12T20:30:54.150Z] Copying: 42/256 [MB] (42 MBps) [2024-12-12T20:30:55.111Z] Copying: 83/256 [MB] (41 MBps) [2024-12-12T20:30:56.050Z] Copying: 125/256 [MB] (41 MBps) [2024-12-12T20:30:56.984Z] Copying: 168/256 [MB] (42 MBps) [2024-12-12T20:30:57.919Z] Copying: 210/256 [MB] (41 MBps) [2024-12-12T20:30:57.919Z] Copying: 253/256 [MB] (43 MBps) [2024-12-12T20:30:57.919Z] Copying: 256/256 [MB] (average 42 MBps)[2024-12-12 20:30:57.854675] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:13.691 [2024-12-12 20:30:57.863710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.691 [2024-12-12 20:30:57.863770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:13.691 [2024-12-12 20:30:57.863799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:13.691 [2024-12-12 20:30:57.863818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.691 [2024-12-12 20:30:57.863856] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:13.691 [2024-12-12 20:30:57.866524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.691 [2024-12-12 20:30:57.866629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:13.691 [2024-12-12 20:30:57.866684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:20:13.691 [2024-12-12 20:30:57.866706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.691 [2024-12-12 20:30:57.868120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.691 [2024-12-12 20:30:57.868224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:13.691 [2024-12-12 20:30:57.868281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 00:20:13.691 [2024-12-12 20:30:57.868303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.691 [2024-12-12 20:30:57.875264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.691 [2024-12-12 20:30:57.875370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:13.691 [2024-12-12 20:30:57.875438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.930 ms 00:20:13.691 [2024-12-12 20:30:57.875461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.691 [2024-12-12 20:30:57.881701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.691 [2024-12-12 20:30:57.881782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:13.691 [2024-12-12 20:30:57.881843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:20:13.691 [2024-12-12 20:30:57.881860] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.691 [2024-12-12 20:30:57.899095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.691 [2024-12-12 20:30:57.899180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:13.691 [2024-12-12 20:30:57.899221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.159 ms 00:20:13.691 [2024-12-12 20:30:57.899237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.691 [2024-12-12 20:30:57.910706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.691 [2024-12-12 20:30:57.910794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:13.691 [2024-12-12 20:30:57.910840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.422 ms 00:20:13.691 [2024-12-12 20:30:57.910859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.691 [2024-12-12 20:30:57.910976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.691 [2024-12-12 20:30:57.911013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:13.691 [2024-12-12 20:30:57.911045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:13.691 [2024-12-12 20:30:57.911067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.951 [2024-12-12 20:30:57.929264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.951 [2024-12-12 20:30:57.929346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:13.951 [2024-12-12 20:30:57.929384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.173 ms 00:20:13.951 [2024-12-12 20:30:57.929401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.951 [2024-12-12 20:30:57.946793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.951 [2024-12-12 20:30:57.946878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:13.951 [2024-12-12 20:30:57.946917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.340 ms 00:20:13.951 [2024-12-12 20:30:57.946933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.951 [2024-12-12 20:30:57.963913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.951 [2024-12-12 20:30:57.963991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:13.951 [2024-12-12 20:30:57.964033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.936 ms 00:20:13.951 [2024-12-12 20:30:57.964049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.951 [2024-12-12 20:30:57.981090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.951 [2024-12-12 20:30:57.981114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:13.951 [2024-12-12 20:30:57.981123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.938 ms 00:20:13.951 [2024-12-12 20:30:57.981128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.951 [2024-12-12 20:30:57.981155] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:13.951 [2024-12-12 20:30:57.981166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981174] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981324] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:13.951 [2024-12-12 20:30:57.981485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 
[2024-12-12 20:30:57.981491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:20:13.952 [2024-12-12 20:30:57.981639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:13.952 [2024-12-12 20:30:57.981791] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:20:13.952 [2024-12-12 20:30:57.981797] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ce9d3d9-3dbe-48b1-9531-60bd0b669e2c 00:20:13.952 [2024-12-12 20:30:57.981804] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:13.952 [2024-12-12 20:30:57.981821] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:13.952 [2024-12-12 20:30:57.981828] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:13.952 [2024-12-12 20:30:57.981833] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:13.952 [2024-12-12 20:30:57.981839] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:13.952 [2024-12-12 20:30:57.981845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:13.952 [2024-12-12 20:30:57.981851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:13.952 [2024-12-12 20:30:57.981856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:13.952 [2024-12-12 20:30:57.981861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:13.952 [2024-12-12 20:30:57.981866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.952 [2024-12-12 20:30:57.981875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:13.952 [2024-12-12 20:30:57.981881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:20:13.952 [2024-12-12 20:30:57.981887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.952 [2024-12-12 20:30:57.991760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.952 [2024-12-12 20:30:57.991841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:13.952 [2024-12-12 20:30:57.991879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.859 ms 00:20:13.952 [2024-12-12 20:30:57.991896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.952 [2024-12-12 20:30:57.992186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.952 [2024-12-12 20:30:57.992246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:13.952 [2024-12-12 20:30:57.992282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:20:13.952 [2024-12-12 20:30:57.992298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.952 [2024-12-12 20:30:58.020073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.952 [2024-12-12 20:30:58.020174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:13.952 [2024-12-12 20:30:58.020217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.952 [2024-12-12 20:30:58.020233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.952 [2024-12-12 20:30:58.020305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.952 [2024-12-12 20:30:58.020352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.952 [2024-12-12 20:30:58.020370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.952 [2024-12-12 20:30:58.020385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.952 [2024-12-12 20:30:58.020475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.952 [2024-12-12 
20:30:58.020496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.952 [2024-12-12 20:30:58.020539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.952 [2024-12-12 20:30:58.020557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.952 [2024-12-12 20:30:58.020580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.952 [2024-12-12 20:30:58.020621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.952 [2024-12-12 20:30:58.020638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.952 [2024-12-12 20:30:58.020653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.952 [2024-12-12 20:30:58.080603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.952 [2024-12-12 20:30:58.080721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.952 [2024-12-12 20:30:58.080763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.952 [2024-12-12 20:30:58.080782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.952 [2024-12-12 20:30:58.128785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.952 [2024-12-12 20:30:58.128904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.952 [2024-12-12 20:30:58.128942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.952 [2024-12-12 20:30:58.128961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.952 [2024-12-12 20:30:58.129017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.952 [2024-12-12 20:30:58.129035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:13.952 [2024-12-12 20:30:58.129050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.952 [2024-12-12 20:30:58.129065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.952 [2024-12-12 20:30:58.129096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.952 [2024-12-12 20:30:58.129151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:13.953 [2024-12-12 20:30:58.129173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.953 [2024-12-12 20:30:58.129188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.953 [2024-12-12 20:30:58.129275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.953 [2024-12-12 20:30:58.129321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:13.953 [2024-12-12 20:30:58.129341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.953 [2024-12-12 20:30:58.129355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.953 [2024-12-12 20:30:58.129394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.953 [2024-12-12 20:30:58.129451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:13.953 [2024-12-12 20:30:58.129471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.953 [2024-12-12 20:30:58.129490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.953 [2024-12-12 20:30:58.129530] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.953 [2024-12-12 20:30:58.129547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:13.953 [2024-12-12 20:30:58.129562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.953 [2024-12-12 20:30:58.129576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.953 [2024-12-12 20:30:58.129655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.953 [2024-12-12 20:30:58.129676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.953 [2024-12-12 20:30:58.129695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.953 [2024-12-12 20:30:58.129710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.953 [2024-12-12 20:30:58.129899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 266.173 ms, result 0 00:20:14.886 00:20:14.886 00:20:14.886 20:30:58 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78342 00:20:14.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.886 20:30:58 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78342 00:20:14.886 20:30:58 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78342 ']' 00:20:14.886 20:30:58 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:14.886 20:30:58 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.886 20:30:58 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.886 20:30:58 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.886 20:30:58 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.886 20:30:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:14.886 [2024-12-12 20:30:59.059836] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
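The waitforlisten helper above (from autotest_common.sh) blocks until the freshly started spdk_tgt creates its RPC socket; only then does trim.sh replay the saved configuration with rpc.py load_config, as seen below. A simplified sketch of the wait loop, under the assumption that polling the pid and the UNIX socket is sufficient (the real helper also honors max_retries and probes the RPC server):

    # Block until $1 has created the RPC socket (default /var/tmp/spdk.sock)
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        while kill -0 "$pid" 2>/dev/null && [ ! -S "$sock" ]; do
            sleep 0.1
        done
        [ -S "$sock" ]   # fail if the target died before listening
    }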
00:20:14.886 [2024-12-12 20:30:59.059950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78342 ] 00:20:15.144 [2024-12-12 20:30:59.215149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.144 [2024-12-12 20:30:59.298852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.079 20:30:59 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.079 20:30:59 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:16.079 20:30:59 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:16.079 [2024-12-12 20:31:00.168091] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.079 [2024-12-12 20:31:00.168155] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.339 [2024-12-12 20:31:00.338278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.339 [2024-12-12 20:31:00.338328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:16.339 [2024-12-12 20:31:00.338343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:16.339 [2024-12-12 20:31:00.338351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.339 [2024-12-12 20:31:00.341036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.339 [2024-12-12 20:31:00.341071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.339 [2024-12-12 20:31:00.341082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.667 ms 00:20:16.339 [2024-12-12 20:31:00.341090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.339 [2024-12-12 20:31:00.341159] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:16.339 [2024-12-12 20:31:00.341875] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:16.339 [2024-12-12 20:31:00.342010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.339 [2024-12-12 20:31:00.342021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.339 [2024-12-12 20:31:00.342031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:20:16.339 [2024-12-12 20:31:00.342038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.339 [2024-12-12 20:31:00.343283] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:16.339 [2024-12-12 20:31:00.355643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.339 [2024-12-12 20:31:00.355682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:16.339 [2024-12-12 20:31:00.355694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.361 ms 00:20:16.339 [2024-12-12 20:31:00.355704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.339 [2024-12-12 20:31:00.355787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.339 [2024-12-12 20:31:00.355800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:16.339 [2024-12-12 20:31:00.355809] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:16.339 [2024-12-12 20:31:00.355817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.339 [2024-12-12 20:31:00.360941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.339 [2024-12-12 20:31:00.360978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.339 [2024-12-12 20:31:00.360987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.078 ms 00:20:16.339 [2024-12-12 20:31:00.360996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.339 [2024-12-12 20:31:00.361087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.339 [2024-12-12 20:31:00.361099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.339 [2024-12-12 20:31:00.361106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:16.339 [2024-12-12 20:31:00.361118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.339 [2024-12-12 20:31:00.361141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.339 [2024-12-12 20:31:00.361151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:16.339 [2024-12-12 20:31:00.361158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:16.339 [2024-12-12 20:31:00.361166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.339 [2024-12-12 20:31:00.361188] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:16.339 [2024-12-12 20:31:00.364525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.339 [2024-12-12 20:31:00.364550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.339 [2024-12-12 20:31:00.364561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.340 ms 00:20:16.339 [2024-12-12 20:31:00.364568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.339 [2024-12-12 20:31:00.364604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.339 [2024-12-12 20:31:00.364613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:16.339 [2024-12-12 20:31:00.364622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:16.339 [2024-12-12 20:31:00.364631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.339 [2024-12-12 20:31:00.364652] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:16.339 [2024-12-12 20:31:00.364670] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:16.339 [2024-12-12 20:31:00.364712] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:16.339 [2024-12-12 20:31:00.364727] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:16.339 [2024-12-12 20:31:00.364829] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:16.339 [2024-12-12 20:31:00.364839] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:16.339 [2024-12-12 20:31:00.364853] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:16.339 [2024-12-12 20:31:00.364862] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:16.339 [2024-12-12 20:31:00.364873] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:16.339 [2024-12-12 20:31:00.364880] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:16.339 [2024-12-12 20:31:00.364889] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:16.339 [2024-12-12 20:31:00.364896] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:16.339 [2024-12-12 20:31:00.364907] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:16.339 [2024-12-12 20:31:00.364915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.339 [2024-12-12 20:31:00.364923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:16.340 [2024-12-12 20:31:00.364930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:16.340 [2024-12-12 20:31:00.364939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.340 [2024-12-12 20:31:00.365028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.340 [2024-12-12 20:31:00.365037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:16.340 [2024-12-12 20:31:00.365044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:16.340 [2024-12-12 20:31:00.365053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.340 [2024-12-12 20:31:00.365163] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:16.340 [2024-12-12 20:31:00.365174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:16.340 [2024-12-12 20:31:00.365182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.340 [2024-12-12 20:31:00.365192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.340 [2024-12-12 20:31:00.365199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:16.340 [2024-12-12 20:31:00.365209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:16.340 [2024-12-12 20:31:00.365216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:16.340 [2024-12-12 20:31:00.365227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:16.340 [2024-12-12 20:31:00.365234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:16.340 [2024-12-12 20:31:00.365243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.340 [2024-12-12 20:31:00.365249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:16.340 [2024-12-12 20:31:00.365259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:16.340 [2024-12-12 20:31:00.365265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.340 [2024-12-12 20:31:00.365273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:16.340 [2024-12-12 20:31:00.365280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:16.340 [2024-12-12 20:31:00.365287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.340 
[2024-12-12 20:31:00.365294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:16.340 [2024-12-12 20:31:00.365302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:16.340 [2024-12-12 20:31:00.365314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.340 [2024-12-12 20:31:00.365322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:16.340 [2024-12-12 20:31:00.365328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:16.340 [2024-12-12 20:31:00.365336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.340 [2024-12-12 20:31:00.365343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:16.340 [2024-12-12 20:31:00.365352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:16.340 [2024-12-12 20:31:00.365358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.340 [2024-12-12 20:31:00.365366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:16.340 [2024-12-12 20:31:00.365373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:16.340 [2024-12-12 20:31:00.365380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.340 [2024-12-12 20:31:00.365387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:16.340 [2024-12-12 20:31:00.365396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:16.340 [2024-12-12 20:31:00.365402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.340 [2024-12-12 20:31:00.365429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:16.340 [2024-12-12 20:31:00.365437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:16.340 [2024-12-12 20:31:00.365445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.340 [2024-12-12 20:31:00.365452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:16.340 [2024-12-12 20:31:00.365459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:16.340 [2024-12-12 20:31:00.365466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.340 [2024-12-12 20:31:00.365474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:16.340 [2024-12-12 20:31:00.365481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:16.340 [2024-12-12 20:31:00.365491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.340 [2024-12-12 20:31:00.365497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:16.340 [2024-12-12 20:31:00.365505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:16.340 [2024-12-12 20:31:00.365512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.340 [2024-12-12 20:31:00.365521] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:16.340 [2024-12-12 20:31:00.365531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:16.340 [2024-12-12 20:31:00.365539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.340 [2024-12-12 20:31:00.365546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.340 [2024-12-12 20:31:00.365555] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:16.340 [2024-12-12 20:31:00.365562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:16.340 [2024-12-12 20:31:00.365570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:16.340 [2024-12-12 20:31:00.365577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:16.340 [2024-12-12 20:31:00.365585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:16.340 [2024-12-12 20:31:00.365592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:16.340 [2024-12-12 20:31:00.365601] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:16.340 [2024-12-12 20:31:00.365610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.340 [2024-12-12 20:31:00.365622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:16.340 [2024-12-12 20:31:00.365630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:16.340 [2024-12-12 20:31:00.365638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:16.340 [2024-12-12 20:31:00.365646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:16.340 [2024-12-12 20:31:00.365654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:16.340 [2024-12-12 20:31:00.365661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:16.340 [2024-12-12 20:31:00.365669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:16.340 [2024-12-12 20:31:00.365676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:16.340 [2024-12-12 20:31:00.365685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:16.340 [2024-12-12 20:31:00.365692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:16.340 [2024-12-12 20:31:00.365700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:16.340 [2024-12-12 20:31:00.365707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:16.340 [2024-12-12 20:31:00.365715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:16.340 [2024-12-12 20:31:00.365723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:16.340 [2024-12-12 20:31:00.365731] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:16.340 [2024-12-12 
20:31:00.365739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.340 [2024-12-12 20:31:00.365750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:16.340 [2024-12-12 20:31:00.365758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:16.340 [2024-12-12 20:31:00.365767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:16.340 [2024-12-12 20:31:00.365774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:16.340 [2024-12-12 20:31:00.365786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.340 [2024-12-12 20:31:00.365793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:16.340 [2024-12-12 20:31:00.365802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:20:16.340 [2024-12-12 20:31:00.365811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.340 [2024-12-12 20:31:00.391821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.340 [2024-12-12 20:31:00.391969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.340 [2024-12-12 20:31:00.391989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.938 ms 00:20:16.340 [2024-12-12 20:31:00.391999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.340 [2024-12-12 20:31:00.392119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.340 [2024-12-12 20:31:00.392129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:16.340 [2024-12-12 20:31:00.392138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:16.340 [2024-12-12 20:31:00.392145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.340 [2024-12-12 20:31:00.422548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.340 [2024-12-12 20:31:00.422681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.340 [2024-12-12 20:31:00.422699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.378 ms 00:20:16.340 [2024-12-12 20:31:00.422706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.340 [2024-12-12 20:31:00.422766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.340 [2024-12-12 20:31:00.422775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.340 [2024-12-12 20:31:00.422784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:16.340 [2024-12-12 20:31:00.422791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.340 [2024-12-12 20:31:00.423105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.340 [2024-12-12 20:31:00.423118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.340 [2024-12-12 20:31:00.423130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:20:16.341 [2024-12-12 20:31:00.423138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:16.341 [2024-12-12 20:31:00.423258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.341 [2024-12-12 20:31:00.423267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.341 [2024-12-12 20:31:00.423276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:16.341 [2024-12-12 20:31:00.423283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.341 [2024-12-12 20:31:00.437624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.341 [2024-12-12 20:31:00.437741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.341 [2024-12-12 20:31:00.437759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.317 ms 00:20:16.341 [2024-12-12 20:31:00.437768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.341 [2024-12-12 20:31:00.468593] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:16.341 [2024-12-12 20:31:00.468632] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:16.341 [2024-12-12 20:31:00.468648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.341 [2024-12-12 20:31:00.468657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:16.341 [2024-12-12 20:31:00.468668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.765 ms 00:20:16.341 [2024-12-12 20:31:00.468681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.341 [2024-12-12 20:31:00.493181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.341 [2024-12-12 20:31:00.493217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:16.341 [2024-12-12 20:31:00.493231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.426 ms 00:20:16.341 [2024-12-12 20:31:00.493239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.341 [2024-12-12 20:31:00.504471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.341 [2024-12-12 20:31:00.504604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:16.341 [2024-12-12 20:31:00.504626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.160 ms 00:20:16.341 [2024-12-12 20:31:00.504633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.341 [2024-12-12 20:31:00.515483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.341 [2024-12-12 20:31:00.515588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:16.341 [2024-12-12 20:31:00.515606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.784 ms 00:20:16.341 [2024-12-12 20:31:00.515613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.341 [2024-12-12 20:31:00.516236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.341 [2024-12-12 20:31:00.516254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:16.341 [2024-12-12 20:31:00.516265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:20:16.341 [2024-12-12 20:31:00.516272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.600 [2024-12-12 
20:31:00.570943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.600 [2024-12-12 20:31:00.571112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:16.600 [2024-12-12 20:31:00.571134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.643 ms 00:20:16.600 [2024-12-12 20:31:00.571142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.600 [2024-12-12 20:31:00.581719] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:16.600 [2024-12-12 20:31:00.596064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.600 [2024-12-12 20:31:00.596187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:16.600 [2024-12-12 20:31:00.596241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.824 ms 00:20:16.600 [2024-12-12 20:31:00.596266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.600 [2024-12-12 20:31:00.596378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.600 [2024-12-12 20:31:00.596409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:16.600 [2024-12-12 20:31:00.596447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:16.600 [2024-12-12 20:31:00.596512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.600 [2024-12-12 20:31:00.596577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.600 [2024-12-12 20:31:00.596600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:16.600 [2024-12-12 20:31:00.596647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:16.600 [2024-12-12 20:31:00.596673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.600 [2024-12-12 20:31:00.596708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.600 [2024-12-12 20:31:00.596730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:16.600 [2024-12-12 20:31:00.596775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:16.600 [2024-12-12 20:31:00.596800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.600 [2024-12-12 20:31:00.596845] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:16.600 [2024-12-12 20:31:00.596927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.600 [2024-12-12 20:31:00.596962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:16.600 [2024-12-12 20:31:00.596983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:16.600 [2024-12-12 20:31:00.597001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.600 [2024-12-12 20:31:00.619963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.600 [2024-12-12 20:31:00.620075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:16.600 [2024-12-12 20:31:00.620128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.923 ms 00:20:16.600 [2024-12-12 20:31:00.620168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.600 [2024-12-12 20:31:00.620731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.600 [2024-12-12 20:31:00.621041] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:16.600 [2024-12-12 20:31:00.621220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:16.600 [2024-12-12 20:31:00.621404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.600 [2024-12-12 20:31:00.623524] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:16.600 [2024-12-12 20:31:00.632268] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 284.484 ms, result 0 00:20:16.600 [2024-12-12 20:31:00.633685] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:16.600 Some configs were skipped because the RPC state that can call them passed over. 00:20:16.600 20:31:00 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:16.859 [2024-12-12 20:31:00.860135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.859 [2024-12-12 20:31:00.860321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:16.859 [2024-12-12 20:31:00.860393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:20:16.859 [2024-12-12 20:31:00.860430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.859 [2024-12-12 20:31:00.860483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.630 ms, result 0 00:20:16.859 true 00:20:16.859 20:31:00 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:16.859 [2024-12-12 20:31:01.063994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.859 [2024-12-12 20:31:01.064148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:16.859 [2024-12-12 20:31:01.064206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:20:16.859 [2024-12-12 20:31:01.064229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.859 [2024-12-12 20:31:01.064281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.122 ms, result 0 00:20:16.859 true 00:20:16.859 20:31:01 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78342 00:20:16.859 20:31:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78342 ']' 00:20:16.859 20:31:01 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78342 00:20:16.859 20:31:01 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:17.117 20:31:01 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.117 20:31:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78342 00:20:17.117 killing process with pid 78342 00:20:17.117 20:31:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.117 20:31:01 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.117 20:31:01 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78342' 00:20:17.117 20:31:01 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78342 00:20:17.117 20:31:01 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78342 00:20:17.685 [2024-12-12 20:31:01.806747] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.685 [2024-12-12 20:31:01.806804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:17.685 [2024-12-12 20:31:01.806817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:17.685 [2024-12-12 20:31:01.806826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.685 [2024-12-12 20:31:01.806851] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:17.685 [2024-12-12 20:31:01.809453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.685 [2024-12-12 20:31:01.809482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:17.685 [2024-12-12 20:31:01.809497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms 00:20:17.685 [2024-12-12 20:31:01.809505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.685 [2024-12-12 20:31:01.809800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.685 [2024-12-12 20:31:01.809809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:17.685 [2024-12-12 20:31:01.809819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:20:17.685 [2024-12-12 20:31:01.809827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.685 [2024-12-12 20:31:01.813822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.685 [2024-12-12 20:31:01.813849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:17.685 [2024-12-12 20:31:01.813863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.973 ms 00:20:17.685 [2024-12-12 20:31:01.813870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.685 [2024-12-12 20:31:01.820725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.685 [2024-12-12 20:31:01.820873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:17.685 [2024-12-12 20:31:01.820893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.820 ms 00:20:17.685 [2024-12-12 20:31:01.820900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.685 [2024-12-12 20:31:01.830972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.685 [2024-12-12 20:31:01.831011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:17.686 [2024-12-12 20:31:01.831024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.015 ms 00:20:17.686 [2024-12-12 20:31:01.831032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.686 [2024-12-12 20:31:01.838156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.686 [2024-12-12 20:31:01.838192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:17.686 [2024-12-12 20:31:01.838206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.084 ms 00:20:17.686 [2024-12-12 20:31:01.838214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.686 [2024-12-12 20:31:01.838361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.686 [2024-12-12 20:31:01.838372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:17.686 [2024-12-12 20:31:01.838382] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:17.686 [2024-12-12 20:31:01.838389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.686 [2024-12-12 20:31:01.848138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.686 [2024-12-12 20:31:01.848277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:17.686 [2024-12-12 20:31:01.848295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.728 ms 00:20:17.686 [2024-12-12 20:31:01.848303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.686 [2024-12-12 20:31:01.857466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.686 [2024-12-12 20:31:01.857495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:17.686 [2024-12-12 20:31:01.857509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.114 ms 00:20:17.686 [2024-12-12 20:31:01.857516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.686 [2024-12-12 20:31:01.866533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.686 [2024-12-12 20:31:01.866651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:17.686 [2024-12-12 20:31:01.866668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.979 ms 00:20:17.686 [2024-12-12 20:31:01.866675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.686 [2024-12-12 20:31:01.875831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.686 [2024-12-12 20:31:01.875863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:17.686 [2024-12-12 20:31:01.875873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.097 ms 00:20:17.686 [2024-12-12 20:31:01.875880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.686 [2024-12-12 20:31:01.875913] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:17.686 [2024-12-12 20:31:01.875927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.875938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.875946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.875955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.875963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.875973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.875981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.875990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.875998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876014] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 
[2024-12-12 20:31:01.876229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:17.686 [2024-12-12 20:31:01.876446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:17.686 [2024-12-12 20:31:01.876523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:17.687 [2024-12-12 20:31:01.876787] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:17.687 [2024-12-12 20:31:01.876800] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ce9d3d9-3dbe-48b1-9531-60bd0b669e2c 00:20:17.687 [2024-12-12 20:31:01.876811] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:17.687 [2024-12-12 20:31:01.876819] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:17.687 [2024-12-12 20:31:01.876826] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:17.687 [2024-12-12 20:31:01.876835] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:17.687 [2024-12-12 20:31:01.876842] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:17.687 [2024-12-12 20:31:01.876851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:17.687 [2024-12-12 20:31:01.876858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:17.687 [2024-12-12 20:31:01.876866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:17.687 [2024-12-12 20:31:01.876872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:17.687 [2024-12-12 20:31:01.876880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:17.687 [2024-12-12 20:31:01.876887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:17.687 [2024-12-12 20:31:01.876897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:20:17.687 [2024-12-12 20:31:01.876904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.687 [2024-12-12 20:31:01.889835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.687 [2024-12-12 20:31:01.889986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:17.687 [2024-12-12 20:31:01.890055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.908 ms 00:20:17.687 [2024-12-12 20:31:01.890086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.687 [2024-12-12 20:31:01.890482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.687 [2024-12-12 20:31:01.890577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:17.687 [2024-12-12 20:31:01.890639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:20:17.687 [2024-12-12 20:31:01.890661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.946 [2024-12-12 20:31:01.934703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.946 [2024-12-12 20:31:01.934835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.946 [2024-12-12 20:31:01.934888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.946 [2024-12-12 20:31:01.934910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.946 [2024-12-12 20:31:01.935028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.946 [2024-12-12 20:31:01.935053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.946 [2024-12-12 20:31:01.935076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.946 [2024-12-12 20:31:01.935094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.946 [2024-12-12 20:31:01.935153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.946 [2024-12-12 20:31:01.935284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.946 [2024-12-12 20:31:01.935350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.946 [2024-12-12 20:31:01.935368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.946 [2024-12-12 20:31:01.935399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.946 [2024-12-12 20:31:01.935440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.946 [2024-12-12 20:31:01.935465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.946 [2024-12-12 20:31:01.935486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.946 [2024-12-12 20:31:02.011179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.946 [2024-12-12 20:31:02.011356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.946 [2024-12-12 20:31:02.011508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.946 [2024-12-12 20:31:02.011541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.946 [2024-12-12 
20:31:02.073844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.946 [2024-12-12 20:31:02.074023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.946 [2024-12-12 20:31:02.074075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.946 [2024-12-12 20:31:02.074116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.946 [2024-12-12 20:31:02.074212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.946 [2024-12-12 20:31:02.074236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.946 [2024-12-12 20:31:02.074259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.946 [2024-12-12 20:31:02.074278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.946 [2024-12-12 20:31:02.074318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.946 [2024-12-12 20:31:02.074337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.946 [2024-12-12 20:31:02.074430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.946 [2024-12-12 20:31:02.074454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.946 [2024-12-12 20:31:02.074569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.946 [2024-12-12 20:31:02.074649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.946 [2024-12-12 20:31:02.074728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.946 [2024-12-12 20:31:02.074751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.946 [2024-12-12 20:31:02.074802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.946 [2024-12-12 20:31:02.074825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:17.946 [2024-12-12 20:31:02.074845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.946 [2024-12-12 20:31:02.074903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.946 [2024-12-12 20:31:02.074961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.946 [2024-12-12 20:31:02.074982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.946 [2024-12-12 20:31:02.075004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.946 [2024-12-12 20:31:02.075053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.946 [2024-12-12 20:31:02.075113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.946 [2024-12-12 20:31:02.075137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.946 [2024-12-12 20:31:02.075157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.946 [2024-12-12 20:31:02.075175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.946 [2024-12-12 20:31:02.075347] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 268.555 ms, result 0 00:20:18.513 20:31:02 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:18.513 20:31:02 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:18.772 [2024-12-12 20:31:02.786241] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:20:18.772 [2024-12-12 20:31:02.786364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78400 ] 00:20:18.772 [2024-12-12 20:31:02.956203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.029 [2024-12-12 20:31:03.075233] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.288 [2024-12-12 20:31:03.302767] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.288 [2024-12-12 20:31:03.302827] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.288 [2024-12-12 20:31:03.454686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.288 [2024-12-12 20:31:03.454730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:19.288 [2024-12-12 20:31:03.454741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:19.288 [2024-12-12 20:31:03.454748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.288 [2024-12-12 20:31:03.456891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.288 [2024-12-12 20:31:03.456922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.288 [2024-12-12 20:31:03.456930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.131 ms 00:20:19.288 [2024-12-12 20:31:03.456936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.288 [2024-12-12 20:31:03.456994] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:19.288 [2024-12-12 20:31:03.457520] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:19.288 [2024-12-12 20:31:03.457536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.288 [2024-12-12 20:31:03.457542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.288 [2024-12-12 20:31:03.457550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:20:19.288 [2024-12-12 20:31:03.457555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.288 [2024-12-12 20:31:03.459109] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:19.288 [2024-12-12 20:31:03.469081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.288 [2024-12-12 20:31:03.469114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:19.288 [2024-12-12 20:31:03.469124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.973 ms 00:20:19.288 [2024-12-12 20:31:03.469131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.288 [2024-12-12 20:31:03.469209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.288 [2024-12-12 20:31:03.469218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:19.288 [2024-12-12 20:31:03.469225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.018 ms 00:20:19.288 [2024-12-12 20:31:03.469231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.288 [2024-12-12 20:31:03.474243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.288 [2024-12-12 20:31:03.474271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.288 [2024-12-12 20:31:03.474279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.977 ms 00:20:19.288 [2024-12-12 20:31:03.474285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.288 [2024-12-12 20:31:03.474369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.288 [2024-12-12 20:31:03.474377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.288 [2024-12-12 20:31:03.474387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:19.288 [2024-12-12 20:31:03.474394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.288 [2024-12-12 20:31:03.474446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.288 [2024-12-12 20:31:03.474459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:19.288 [2024-12-12 20:31:03.474469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:19.288 [2024-12-12 20:31:03.474477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.288 [2024-12-12 20:31:03.474501] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:19.288 [2024-12-12 20:31:03.477195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.288 [2024-12-12 20:31:03.477334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.288 [2024-12-12 20:31:03.477348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.700 ms 00:20:19.288 [2024-12-12 20:31:03.477354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.288 [2024-12-12 20:31:03.477387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.288 [2024-12-12 20:31:03.477395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:19.288 [2024-12-12 20:31:03.477401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:19.289 [2024-12-12 20:31:03.477407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.289 [2024-12-12 20:31:03.477437] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:19.289 [2024-12-12 20:31:03.477454] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:19.289 [2024-12-12 20:31:03.477482] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:19.289 [2024-12-12 20:31:03.477494] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:19.289 [2024-12-12 20:31:03.477575] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:19.289 [2024-12-12 20:31:03.477583] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:19.289 [2024-12-12 20:31:03.477591] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:19.289 [2024-12-12 20:31:03.477601] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:19.289 [2024-12-12 20:31:03.477608] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:19.289 [2024-12-12 20:31:03.477615] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:19.289 [2024-12-12 20:31:03.477620] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:19.289 [2024-12-12 20:31:03.477626] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:19.289 [2024-12-12 20:31:03.477632] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:19.289 [2024-12-12 20:31:03.477637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.289 [2024-12-12 20:31:03.477644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:19.289 [2024-12-12 20:31:03.477650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:20:19.289 [2024-12-12 20:31:03.477655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.289 [2024-12-12 20:31:03.477724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.289 [2024-12-12 20:31:03.477732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:19.289 [2024-12-12 20:31:03.477738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:19.289 [2024-12-12 20:31:03.477744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.289 [2024-12-12 20:31:03.477819] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:19.289 [2024-12-12 20:31:03.477827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:19.289 [2024-12-12 20:31:03.477833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.289 [2024-12-12 20:31:03.477839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.289 [2024-12-12 20:31:03.477845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:19.289 [2024-12-12 20:31:03.477850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:19.289 [2024-12-12 20:31:03.477855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:19.289 [2024-12-12 20:31:03.477861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:19.289 [2024-12-12 20:31:03.477867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:19.289 [2024-12-12 20:31:03.477872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.289 [2024-12-12 20:31:03.477877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:19.289 [2024-12-12 20:31:03.477888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:19.289 [2024-12-12 20:31:03.477893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.289 [2024-12-12 20:31:03.477898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:19.289 [2024-12-12 20:31:03.477904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:19.289 [2024-12-12 20:31:03.477910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.289 [2024-12-12 20:31:03.477915] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:19.289 [2024-12-12 20:31:03.477920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:19.289 [2024-12-12 20:31:03.477925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.289 [2024-12-12 20:31:03.477930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:19.289 [2024-12-12 20:31:03.477936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:19.289 [2024-12-12 20:31:03.477941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.289 [2024-12-12 20:31:03.477946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:19.289 [2024-12-12 20:31:03.477951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:19.289 [2024-12-12 20:31:03.477956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.289 [2024-12-12 20:31:03.477961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:19.289 [2024-12-12 20:31:03.477966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:19.289 [2024-12-12 20:31:03.477971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.289 [2024-12-12 20:31:03.477977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:19.289 [2024-12-12 20:31:03.477982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:19.289 [2024-12-12 20:31:03.477986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.289 [2024-12-12 20:31:03.477991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:19.289 [2024-12-12 20:31:03.477997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:19.289 [2024-12-12 20:31:03.478002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.289 [2024-12-12 20:31:03.478007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:19.289 [2024-12-12 20:31:03.478012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:19.289 [2024-12-12 20:31:03.478016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.289 [2024-12-12 20:31:03.478021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:19.289 [2024-12-12 20:31:03.478026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:19.289 [2024-12-12 20:31:03.478031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.289 [2024-12-12 20:31:03.478036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:19.289 [2024-12-12 20:31:03.478041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:19.289 [2024-12-12 20:31:03.478046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.289 [2024-12-12 20:31:03.478052] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:19.289 [2024-12-12 20:31:03.478058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:19.289 [2024-12-12 20:31:03.478065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.289 [2024-12-12 20:31:03.478071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.289 [2024-12-12 20:31:03.478078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:19.289 
[2024-12-12 20:31:03.478084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:19.289 [2024-12-12 20:31:03.478089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:19.289 [2024-12-12 20:31:03.478094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:19.289 [2024-12-12 20:31:03.478099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:19.289 [2024-12-12 20:31:03.478104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:19.289 [2024-12-12 20:31:03.478110] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:19.289 [2024-12-12 20:31:03.478117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.289 [2024-12-12 20:31:03.478123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:19.289 [2024-12-12 20:31:03.478129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:19.289 [2024-12-12 20:31:03.478135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:19.289 [2024-12-12 20:31:03.478140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:19.289 [2024-12-12 20:31:03.478145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:19.289 [2024-12-12 20:31:03.478151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:19.289 [2024-12-12 20:31:03.478157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:19.289 [2024-12-12 20:31:03.478162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:19.289 [2024-12-12 20:31:03.478168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:19.289 [2024-12-12 20:31:03.478173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:19.289 [2024-12-12 20:31:03.478186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:19.289 [2024-12-12 20:31:03.478192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:19.289 [2024-12-12 20:31:03.478198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:19.289 [2024-12-12 20:31:03.478204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:19.289 [2024-12-12 20:31:03.478209] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:19.289 [2024-12-12 20:31:03.478216] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.289 [2024-12-12 20:31:03.478222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:19.289 [2024-12-12 20:31:03.478228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:19.289 [2024-12-12 20:31:03.478234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:19.289 [2024-12-12 20:31:03.478240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:19.289 [2024-12-12 20:31:03.478246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.289 [2024-12-12 20:31:03.478254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:19.289 [2024-12-12 20:31:03.478260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:20:19.289 [2024-12-12 20:31:03.478266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.290 [2024-12-12 20:31:03.500931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.290 [2024-12-12 20:31:03.501037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.290 [2024-12-12 20:31:03.501095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.607 ms 00:20:19.290 [2024-12-12 20:31:03.501115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.290 [2024-12-12 20:31:03.501231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.290 [2024-12-12 20:31:03.501251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:19.290 [2024-12-12 20:31:03.501267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:19.290 [2024-12-12 20:31:03.501282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.618 [2024-12-12 20:31:03.548017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.618 [2024-12-12 20:31:03.548147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.618 [2024-12-12 20:31:03.548209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.706 ms 00:20:19.618 [2024-12-12 20:31:03.548230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.618 [2024-12-12 20:31:03.548404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.618 [2024-12-12 20:31:03.548486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.618 [2024-12-12 20:31:03.548542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:19.618 [2024-12-12 20:31:03.548562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.618 [2024-12-12 20:31:03.548881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.618 [2024-12-12 20:31:03.548959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.618 [2024-12-12 20:31:03.549050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:20:19.618 [2024-12-12 20:31:03.549111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.618 [2024-12-12 
20:31:03.549234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.618 [2024-12-12 20:31:03.549271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.618 [2024-12-12 20:31:03.549319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:19.618 [2024-12-12 20:31:03.549336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.618 [2024-12-12 20:31:03.560752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.618 [2024-12-12 20:31:03.560841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.618 [2024-12-12 20:31:03.560881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.388 ms 00:20:19.618 [2024-12-12 20:31:03.560898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.618 [2024-12-12 20:31:03.570793] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:19.619 [2024-12-12 20:31:03.570896] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:19.619 [2024-12-12 20:31:03.570946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.619 [2024-12-12 20:31:03.570977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:19.619 [2024-12-12 20:31:03.570996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.950 ms 00:20:19.619 [2024-12-12 20:31:03.571076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.619 [2024-12-12 20:31:03.590205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.619 [2024-12-12 20:31:03.590310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:19.619 [2024-12-12 20:31:03.590355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.042 ms 00:20:19.619 [2024-12-12 20:31:03.590373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.619 [2024-12-12 20:31:03.599503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.619 [2024-12-12 20:31:03.599600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:19.619 [2024-12-12 20:31:03.599643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.062 ms 00:20:19.619 [2024-12-12 20:31:03.599660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.619 [2024-12-12 20:31:03.608388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.619 [2024-12-12 20:31:03.608488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:19.619 [2024-12-12 20:31:03.608530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.678 ms 00:20:19.619 [2024-12-12 20:31:03.608546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.619 [2024-12-12 20:31:03.609032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.619 [2024-12-12 20:31:03.609106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:19.619 [2024-12-12 20:31:03.609146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:20:19.619 [2024-12-12 20:31:03.609163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.619 [2024-12-12 20:31:03.654607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:19.619 [2024-12-12 20:31:03.654752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:19.619 [2024-12-12 20:31:03.654797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.409 ms 00:20:19.619 [2024-12-12 20:31:03.654815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.619 [2024-12-12 20:31:03.662952] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:19.619 [2024-12-12 20:31:03.675914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.619 [2024-12-12 20:31:03.676023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:19.619 [2024-12-12 20:31:03.676103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.003 ms 00:20:19.619 [2024-12-12 20:31:03.676127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.619 [2024-12-12 20:31:03.676213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.619 [2024-12-12 20:31:03.676234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:19.619 [2024-12-12 20:31:03.676251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:19.619 [2024-12-12 20:31:03.676299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.619 [2024-12-12 20:31:03.676354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.619 [2024-12-12 20:31:03.676372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:19.619 [2024-12-12 20:31:03.676388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:19.619 [2024-12-12 20:31:03.676406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.619 [2024-12-12 20:31:03.676511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.619 [2024-12-12 20:31:03.676529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:19.619 [2024-12-12 20:31:03.676544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:19.619 [2024-12-12 20:31:03.676559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.619 [2024-12-12 20:31:03.676627] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:19.619 [2024-12-12 20:31:03.676751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.619 [2024-12-12 20:31:03.676779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:19.619 [2024-12-12 20:31:03.676820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:20:19.619 [2024-12-12 20:31:03.676837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.619 [2024-12-12 20:31:03.695534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.619 [2024-12-12 20:31:03.695629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:19.619 [2024-12-12 20:31:03.695670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.660 ms 00:20:19.619 [2024-12-12 20:31:03.695688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.619 [2024-12-12 20:31:03.695763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.619 [2024-12-12 20:31:03.695854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:19.619 [2024-12-12 20:31:03.695874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:19.619 [2024-12-12 20:31:03.695889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.619 [2024-12-12 20:31:03.696639] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.619 [2024-12-12 20:31:03.699164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 241.700 ms, result 0 00:20:19.619 [2024-12-12 20:31:03.699739] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:19.619 [2024-12-12 20:31:03.714754] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:20.553  [2024-12-12T20:31:06.153Z] Copying: 47/256 [MB] (47 MBps) [2024-12-12T20:31:06.719Z] Copying: 90/256 [MB] (42 MBps) [2024-12-12T20:31:08.091Z] Copying: 133/256 [MB] (43 MBps) [2024-12-12T20:31:09.026Z] Copying: 175/256 [MB] (42 MBps) [2024-12-12T20:31:09.593Z] Copying: 219/256 [MB] (43 MBps) [2024-12-12T20:31:09.593Z] Copying: 256/256 [MB] (average 44 MBps)[2024-12-12 20:31:09.464483] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:25.365 [2024-12-12 20:31:09.473719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.365 [2024-12-12 20:31:09.473869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:25.365 [2024-12-12 20:31:09.473892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:25.365 [2024-12-12 20:31:09.473900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.365 [2024-12-12 20:31:09.473924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:25.365 [2024-12-12 20:31:09.476518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.365 [2024-12-12 20:31:09.476546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:25.365 [2024-12-12 20:31:09.476556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.582 ms 00:20:25.365 [2024-12-12 20:31:09.476565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.365 [2024-12-12 20:31:09.476815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.365 [2024-12-12 20:31:09.476825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:25.365 [2024-12-12 20:31:09.476833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:20:25.365 [2024-12-12 20:31:09.476839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.365 [2024-12-12 20:31:09.480543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.365 [2024-12-12 20:31:09.480564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:25.365 [2024-12-12 20:31:09.480573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.686 ms 00:20:25.365 [2024-12-12 20:31:09.480581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.365 [2024-12-12 20:31:09.487424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.365 [2024-12-12 20:31:09.487534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
00:20:25.365 [2024-12-12 20:31:09.487549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.825 ms 00:20:25.365 [2024-12-12 20:31:09.487556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.365 [2024-12-12 20:31:09.511384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.365 [2024-12-12 20:31:09.511507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:25.365 [2024-12-12 20:31:09.511523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.778 ms 00:20:25.365 [2024-12-12 20:31:09.511530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.365 [2024-12-12 20:31:09.525467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.365 [2024-12-12 20:31:09.525500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:25.365 [2024-12-12 20:31:09.525516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.916 ms 00:20:25.365 [2024-12-12 20:31:09.525525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.365 [2024-12-12 20:31:09.525644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.365 [2024-12-12 20:31:09.525653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:25.365 [2024-12-12 20:31:09.525669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:25.365 [2024-12-12 20:31:09.525676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.365 [2024-12-12 20:31:09.549406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.365 [2024-12-12 20:31:09.549446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:25.365 [2024-12-12 20:31:09.549456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.713 ms 00:20:25.365 [2024-12-12 20:31:09.549463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.365 [2024-12-12 20:31:09.572580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.365 [2024-12-12 20:31:09.572610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:25.365 [2024-12-12 20:31:09.572620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.096 ms 00:20:25.365 [2024-12-12 20:31:09.572626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.624 [2024-12-12 20:31:09.595145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.624 [2024-12-12 20:31:09.595266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:25.624 [2024-12-12 20:31:09.595281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.497 ms 00:20:25.624 [2024-12-12 20:31:09.595289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.624 [2024-12-12 20:31:09.617733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.624 [2024-12-12 20:31:09.617846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:25.624 [2024-12-12 20:31:09.617860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.396 ms 00:20:25.624 [2024-12-12 20:31:09.617867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.624 [2024-12-12 20:31:09.617887] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:25.624 [2024-12-12 
20:31:09.617900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:25.624 [2024-12-12 20:31:09.617910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:25.624 [2024-12-12 20:31:09.617918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:25.624 [2024-12-12 20:31:09.617926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:25.624 [2024-12-12 20:31:09.617934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.617941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.617949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.617956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.617964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.617971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.617979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.617986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.617993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 
[2024-12-12 20:31:09.618089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:20:25.625 [2024-12-12 20:31:09.618275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:25.625 [2024-12-12 20:31:09.618690] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:25.625 [2024-12-12 20:31:09.618698] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ce9d3d9-3dbe-48b1-9531-60bd0b669e2c 00:20:25.625 [2024-12-12 20:31:09.618706] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:25.625 [2024-12-12 20:31:09.618713] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:25.625 [2024-12-12 20:31:09.618720] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:25.625 [2024-12-12 20:31:09.618727] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:25.625 [2024-12-12 20:31:09.618734] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:25.625 [2024-12-12 20:31:09.618741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:25.625 [2024-12-12 20:31:09.618750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:25.625 [2024-12-12 20:31:09.618756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:25.625 [2024-12-12 20:31:09.618762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:25.625 [2024-12-12 20:31:09.618769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.625 [2024-12-12 20:31:09.618776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:25.625 [2024-12-12 20:31:09.618785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:20:25.625 [2024-12-12 20:31:09.618791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.631133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.625 [2024-12-12 20:31:09.631161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:25.625 [2024-12-12 20:31:09.631171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.324 ms 00:20:25.625 [2024-12-12 20:31:09.631179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.631553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.625 [2024-12-12 20:31:09.631562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:25.625 [2024-12-12 20:31:09.631571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:20:25.625 [2024-12-12 20:31:09.631577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.666249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.625 [2024-12-12 20:31:09.666280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:25.625 [2024-12-12 20:31:09.666290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.625 [2024-12-12 20:31:09.666301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.666389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.625 [2024-12-12 20:31:09.666398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:25.625 [2024-12-12 20:31:09.666406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.625 [2024-12-12 20:31:09.666431] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.666472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.625 [2024-12-12 20:31:09.666480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:25.625 [2024-12-12 20:31:09.666488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.625 [2024-12-12 20:31:09.666495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.666515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.625 [2024-12-12 20:31:09.666523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:25.625 [2024-12-12 20:31:09.666530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.625 [2024-12-12 20:31:09.666537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.742260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.625 [2024-12-12 20:31:09.742305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:25.625 [2024-12-12 20:31:09.742315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.625 [2024-12-12 20:31:09.742323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.803969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.625 [2024-12-12 20:31:09.804013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:25.625 [2024-12-12 20:31:09.804023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.625 [2024-12-12 20:31:09.804031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.804093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.625 [2024-12-12 20:31:09.804101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:25.625 [2024-12-12 20:31:09.804109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.625 [2024-12-12 20:31:09.804117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.804144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.625 [2024-12-12 20:31:09.804156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:25.625 [2024-12-12 20:31:09.804163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.625 [2024-12-12 20:31:09.804171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.804253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.625 [2024-12-12 20:31:09.804263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:25.625 [2024-12-12 20:31:09.804271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.625 [2024-12-12 20:31:09.804278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.804306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.625 [2024-12-12 20:31:09.804315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:25.625 [2024-12-12 20:31:09.804325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:25.625 [2024-12-12 20:31:09.804332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.804367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.625 [2024-12-12 20:31:09.804376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:25.625 [2024-12-12 20:31:09.804384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.625 [2024-12-12 20:31:09.804391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.804456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.625 [2024-12-12 20:31:09.804470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:25.625 [2024-12-12 20:31:09.804477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.625 [2024-12-12 20:31:09.804484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.625 [2024-12-12 20:31:09.804612] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 330.868 ms, result 0 00:20:26.559 00:20:26.559 00:20:26.559 20:31:10 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:26.559 20:31:10 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:27.124 20:31:11 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:27.124 [2024-12-12 20:31:11.130995] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:20:27.124 [2024-12-12 20:31:11.131151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78492 ] 00:20:27.124 [2024-12-12 20:31:11.301225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.382 [2024-12-12 20:31:11.400251] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.641 [2024-12-12 20:31:11.660864] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:27.641 [2024-12-12 20:31:11.660927] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:27.641 [2024-12-12 20:31:11.815244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.641 [2024-12-12 20:31:11.815295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:27.641 [2024-12-12 20:31:11.815308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:27.641 [2024-12-12 20:31:11.815317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.641 [2024-12-12 20:31:11.817976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.641 [2024-12-12 20:31:11.818013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:27.641 [2024-12-12 20:31:11.818023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:20:27.641 [2024-12-12 20:31:11.818031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.641 [2024-12-12 20:31:11.818099] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:27.641 [2024-12-12 20:31:11.818839] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:27.641 [2024-12-12 20:31:11.818967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.641 [2024-12-12 20:31:11.818979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:27.641 [2024-12-12 20:31:11.818988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:20:27.641 [2024-12-12 20:31:11.818996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.641 [2024-12-12 20:31:11.820188] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:27.641 [2024-12-12 20:31:11.832809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.641 [2024-12-12 20:31:11.832843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:27.641 [2024-12-12 20:31:11.832854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.621 ms 00:20:27.641 [2024-12-12 20:31:11.832862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.641 [2024-12-12 20:31:11.832951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.641 [2024-12-12 20:31:11.832962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:27.641 [2024-12-12 20:31:11.832971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:27.641 [2024-12-12 20:31:11.832978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.641 [2024-12-12 20:31:11.838270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:27.641 [2024-12-12 20:31:11.838302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:27.641 [2024-12-12 20:31:11.838312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.251 ms 00:20:27.641 [2024-12-12 20:31:11.838319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.641 [2024-12-12 20:31:11.838404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.641 [2024-12-12 20:31:11.838431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:27.641 [2024-12-12 20:31:11.838439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:27.641 [2024-12-12 20:31:11.838447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.641 [2024-12-12 20:31:11.838474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.641 [2024-12-12 20:31:11.838482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:27.641 [2024-12-12 20:31:11.838490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:27.641 [2024-12-12 20:31:11.838497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.641 [2024-12-12 20:31:11.838517] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:27.641 [2024-12-12 20:31:11.841768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.641 [2024-12-12 20:31:11.841795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:27.641 [2024-12-12 20:31:11.841804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.255 ms 00:20:27.641 [2024-12-12 20:31:11.841812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.641 [2024-12-12 20:31:11.841848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.641 [2024-12-12 20:31:11.841857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:27.641 [2024-12-12 20:31:11.841865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:27.641 [2024-12-12 20:31:11.841872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.641 [2024-12-12 20:31:11.841891] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:27.641 [2024-12-12 20:31:11.841909] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:27.641 [2024-12-12 20:31:11.841943] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:27.641 [2024-12-12 20:31:11.841958] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:27.641 [2024-12-12 20:31:11.842059] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:27.641 [2024-12-12 20:31:11.842069] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:27.641 [2024-12-12 20:31:11.842079] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:27.641 [2024-12-12 20:31:11.842090] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:27.641 [2024-12-12 20:31:11.842099] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:27.641 [2024-12-12 20:31:11.842106] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:27.641 [2024-12-12 20:31:11.842113] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:27.641 [2024-12-12 20:31:11.842120] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:27.641 [2024-12-12 20:31:11.842127] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:27.641 [2024-12-12 20:31:11.842134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.641 [2024-12-12 20:31:11.842142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:27.641 [2024-12-12 20:31:11.842149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:20:27.641 [2024-12-12 20:31:11.842156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.641 [2024-12-12 20:31:11.842243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.641 [2024-12-12 20:31:11.842253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:27.641 [2024-12-12 20:31:11.842260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:27.641 [2024-12-12 20:31:11.842267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.641 [2024-12-12 20:31:11.842364] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:27.641 [2024-12-12 20:31:11.842373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:27.641 [2024-12-12 20:31:11.842380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:27.641 [2024-12-12 20:31:11.842387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.641 [2024-12-12 20:31:11.842395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:27.641 [2024-12-12 20:31:11.842401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:27.642 [2024-12-12 20:31:11.842408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:27.642 [2024-12-12 20:31:11.842434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:27.642 [2024-12-12 20:31:11.842442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:27.642 [2024-12-12 20:31:11.842449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:27.642 [2024-12-12 20:31:11.842455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:27.642 [2024-12-12 20:31:11.842468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:27.642 [2024-12-12 20:31:11.842475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:27.642 [2024-12-12 20:31:11.842481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:27.642 [2024-12-12 20:31:11.842487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:27.642 [2024-12-12 20:31:11.842494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.642 [2024-12-12 20:31:11.842500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:27.642 [2024-12-12 20:31:11.842507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:27.642 [2024-12-12 20:31:11.842514] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.642 [2024-12-12 20:31:11.842522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:27.642 [2024-12-12 20:31:11.842528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:27.642 [2024-12-12 20:31:11.842535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.642 [2024-12-12 20:31:11.842541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:27.642 [2024-12-12 20:31:11.842548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:27.642 [2024-12-12 20:31:11.842554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.642 [2024-12-12 20:31:11.842562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:27.642 [2024-12-12 20:31:11.842568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:27.642 [2024-12-12 20:31:11.842574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.642 [2024-12-12 20:31:11.842581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:27.642 [2024-12-12 20:31:11.842587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:27.642 [2024-12-12 20:31:11.842593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.642 [2024-12-12 20:31:11.842600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:27.642 [2024-12-12 20:31:11.842606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:27.642 [2024-12-12 20:31:11.842613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:27.642 [2024-12-12 20:31:11.842619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:27.642 [2024-12-12 20:31:11.842625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:27.642 [2024-12-12 20:31:11.842632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:27.642 [2024-12-12 20:31:11.842638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:27.642 [2024-12-12 20:31:11.842645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:27.642 [2024-12-12 20:31:11.842651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.642 [2024-12-12 20:31:11.842657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:27.642 [2024-12-12 20:31:11.842664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:27.642 [2024-12-12 20:31:11.842670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.642 [2024-12-12 20:31:11.842677] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:27.642 [2024-12-12 20:31:11.842684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:27.642 [2024-12-12 20:31:11.842694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:27.642 [2024-12-12 20:31:11.842701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.642 [2024-12-12 20:31:11.842708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:27.642 [2024-12-12 20:31:11.842715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:27.642 [2024-12-12 20:31:11.842722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:27.642 
[2024-12-12 20:31:11.842736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:27.642 [2024-12-12 20:31:11.842743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:27.642 [2024-12-12 20:31:11.842750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:27.642 [2024-12-12 20:31:11.842758] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:27.642 [2024-12-12 20:31:11.842767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:27.642 [2024-12-12 20:31:11.842775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:27.642 [2024-12-12 20:31:11.842783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:27.642 [2024-12-12 20:31:11.842790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:27.642 [2024-12-12 20:31:11.842797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:27.642 [2024-12-12 20:31:11.842804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:27.642 [2024-12-12 20:31:11.842811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:27.642 [2024-12-12 20:31:11.842818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:27.642 [2024-12-12 20:31:11.842825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:27.642 [2024-12-12 20:31:11.842832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:27.642 [2024-12-12 20:31:11.842839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:27.642 [2024-12-12 20:31:11.842845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:27.642 [2024-12-12 20:31:11.842852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:27.642 [2024-12-12 20:31:11.842859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:27.642 [2024-12-12 20:31:11.842866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:27.642 [2024-12-12 20:31:11.842873] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:27.642 [2024-12-12 20:31:11.842880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:27.642 [2024-12-12 20:31:11.842889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:27.642 [2024-12-12 20:31:11.842896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:27.642 [2024-12-12 20:31:11.842904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:27.642 [2024-12-12 20:31:11.842911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:27.642 [2024-12-12 20:31:11.842919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.642 [2024-12-12 20:31:11.842928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:27.642 [2024-12-12 20:31:11.842935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:20:27.642 [2024-12-12 20:31:11.842942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.900 [2024-12-12 20:31:11.869457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.900 [2024-12-12 20:31:11.869488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.900 [2024-12-12 20:31:11.869498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.436 ms 00:20:27.900 [2024-12-12 20:31:11.869505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:11.869623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:11.869633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:27.901 [2024-12-12 20:31:11.869641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:27.901 [2024-12-12 20:31:11.869649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:11.913230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:11.913268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:27.901 [2024-12-12 20:31:11.913282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.557 ms 00:20:27.901 [2024-12-12 20:31:11.913290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:11.913377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:11.913388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:27.901 [2024-12-12 20:31:11.913397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:27.901 [2024-12-12 20:31:11.913404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:11.913758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:11.913778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:27.901 [2024-12-12 20:31:11.913788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:20:27.901 [2024-12-12 20:31:11.913799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:11.913923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:11.913932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:27.901 [2024-12-12 20:31:11.913940] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:20:27.901 [2024-12-12 20:31:11.913947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:11.927505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:11.927650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:27.901 [2024-12-12 20:31:11.927665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.537 ms 00:20:27.901 [2024-12-12 20:31:11.927673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:11.940240] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:27.901 [2024-12-12 20:31:11.940272] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:27.901 [2024-12-12 20:31:11.940284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:11.940291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:27.901 [2024-12-12 20:31:11.940301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.509 ms 00:20:27.901 [2024-12-12 20:31:11.940307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:11.964124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:11.964170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:27.901 [2024-12-12 20:31:11.964180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.747 ms 00:20:27.901 [2024-12-12 20:31:11.964187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:11.975424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:11.975454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:27.901 [2024-12-12 20:31:11.975464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.171 ms 00:20:27.901 [2024-12-12 20:31:11.975471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:11.986501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:11.986621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:27.901 [2024-12-12 20:31:11.986636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.969 ms 00:20:27.901 [2024-12-12 20:31:11.986643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:11.987259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:11.987279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:27.901 [2024-12-12 20:31:11.987288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:20:27.901 [2024-12-12 20:31:11.987295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:12.041922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:12.042093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:27.901 [2024-12-12 20:31:12.042110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.599 ms 00:20:27.901 [2024-12-12 20:31:12.042119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:12.052220] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:27.901 [2024-12-12 20:31:12.066278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:12.066315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:27.901 [2024-12-12 20:31:12.066326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.078 ms 00:20:27.901 [2024-12-12 20:31:12.066338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:12.066410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:12.066445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:27.901 [2024-12-12 20:31:12.066455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:27.901 [2024-12-12 20:31:12.066462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:12.066509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:12.066518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:27.901 [2024-12-12 20:31:12.066526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:27.901 [2024-12-12 20:31:12.066536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:12.066562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:12.066575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:27.901 [2024-12-12 20:31:12.066582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:27.901 [2024-12-12 20:31:12.066589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:12.066620] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:27.901 [2024-12-12 20:31:12.066629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:12.066637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:27.901 [2024-12-12 20:31:12.066644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:27.901 [2024-12-12 20:31:12.066651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:12.089671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:12.089707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:27.901 [2024-12-12 20:31:12.089719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.999 ms 00:20:27.901 [2024-12-12 20:31:12.089727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.901 [2024-12-12 20:31:12.089810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.901 [2024-12-12 20:31:12.089821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:27.901 [2024-12-12 20:31:12.089829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:27.901 [2024-12-12 20:31:12.089836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
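Before the 'FTL startup' summary that follows, note that the layout numbers dumped above are internally consistent: 23592960 L2P entries at 4 bytes each is exactly the 90.00 MiB reported for the l2p region, and at one entry per 4 KiB FTL block the same table addresses 90 GiB of user data; the ftl_l2p_cache notice ("59 (of 60) MiB") suggests the table is demand-paged through a roughly 60 MiB resident cache rather than held in memory whole. A quick consistency check with shell arithmetic (illustrative only):

    entries=23592960   # "L2P entries" from the ftl_layout_setup notices
    addr_size=4        # "L2P address size" in bytes
    echo "$(( entries * addr_size / 1024 / 1024 )) MiB table"  # 90, matches "Region l2p ... blocks: 90.00 MiB"
    echo "$(( entries * 4 / 1024 / 1024 )) GiB addressable"    # 90, one entry per 4 KiB block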
00:20:27.901 [2024-12-12 20:31:12.090595] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:27.901 [2024-12-12 20:31:12.093601] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 275.055 ms, result 0 00:20:27.901 [2024-12-12 20:31:12.094387] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:27.901 [2024-12-12 20:31:12.107135] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:28.161  [2024-12-12T20:31:12.389Z] Copying: 4096/4096 [kB] (average 42 MBps)[2024-12-12 20:31:12.204215] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:28.161 [2024-12-12 20:31:12.212846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.161 [2024-12-12 20:31:12.212878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:28.161 [2024-12-12 20:31:12.212893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:28.161 [2024-12-12 20:31:12.212901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.161 [2024-12-12 20:31:12.212922] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:28.161 [2024-12-12 20:31:12.215498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.161 [2024-12-12 20:31:12.215622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:28.161 [2024-12-12 20:31:12.215637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.564 ms 00:20:28.161 [2024-12-12 20:31:12.215645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.161 [2024-12-12 20:31:12.218093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.161 [2024-12-12 20:31:12.218200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:28.161 [2024-12-12 20:31:12.218234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.407 ms 00:20:28.161 [2024-12-12 20:31:12.218259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.161 [2024-12-12 20:31:12.230792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.161 [2024-12-12 20:31:12.231117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:28.161 [2024-12-12 20:31:12.231163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.438 ms 00:20:28.161 [2024-12-12 20:31:12.231186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.161 [2024-12-12 20:31:12.238242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.161 [2024-12-12 20:31:12.238331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:28.161 [2024-12-12 20:31:12.238381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.004 ms 00:20:28.161 [2024-12-12 20:31:12.238403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.161 [2024-12-12 20:31:12.261286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.161 [2024-12-12 20:31:12.261396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:28.161 [2024-12-12 20:31:12.261467] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 22.816 ms 00:20:28.161 [2024-12-12 20:31:12.261489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.161 [2024-12-12 20:31:12.275298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.161 [2024-12-12 20:31:12.275401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:28.161 [2024-12-12 20:31:12.275469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.775 ms 00:20:28.161 [2024-12-12 20:31:12.275492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.161 [2024-12-12 20:31:12.275644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.161 [2024-12-12 20:31:12.275671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:28.161 [2024-12-12 20:31:12.275742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:28.161 [2024-12-12 20:31:12.275764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.161 [2024-12-12 20:31:12.298404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.161 [2024-12-12 20:31:12.298510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:28.161 [2024-12-12 20:31:12.298556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.612 ms 00:20:28.161 [2024-12-12 20:31:12.298576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.161 [2024-12-12 20:31:12.321079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.161 [2024-12-12 20:31:12.321178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:28.161 [2024-12-12 20:31:12.321224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.470 ms 00:20:28.161 [2024-12-12 20:31:12.321244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.161 [2024-12-12 20:31:12.342926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.161 [2024-12-12 20:31:12.343026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:28.161 [2024-12-12 20:31:12.343075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.650 ms 00:20:28.161 [2024-12-12 20:31:12.343096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.161 [2024-12-12 20:31:12.365151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.161 [2024-12-12 20:31:12.365251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:28.161 [2024-12-12 20:31:12.365298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.954 ms 00:20:28.161 [2024-12-12 20:31:12.365319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.161 [2024-12-12 20:31:12.365350] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:28.161 [2024-12-12 20:31:12.365375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.365469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.365519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.365548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:28.161 [2024-12-12 20:31:12.365575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.365603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.365631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.365658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.365686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.365785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.365818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.365846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.365873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.365940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.365970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.365998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.366025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.366110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.366174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.366204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.366232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.366328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.366356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.366384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.366421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.366453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.366573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.366604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:28.161 [2024-12-12 20:31:12.366632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.366694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.366725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.366754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.366792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.366873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.366925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.366955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.366983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.367973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368305] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:28.162 [2024-12-12 20:31:12.368494] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:28.162 [2024-12-12 20:31:12.368502] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ce9d3d9-3dbe-48b1-9531-60bd0b669e2c 00:20:28.162 [2024-12-12 20:31:12.368509] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:28.162 [2024-12-12 20:31:12.368516] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:28.162 [2024-12-12 20:31:12.368523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:28.162 [2024-12-12 20:31:12.368531] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:28.162 [2024-12-12 20:31:12.368537] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:28.162 [2024-12-12 20:31:12.368544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:28.162 [2024-12-12 20:31:12.368554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:28.162 [2024-12-12 20:31:12.368560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:28.162 [2024-12-12 20:31:12.368566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:28.162 [2024-12-12 20:31:12.368574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.162 [2024-12-12 20:31:12.368581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:28.162 [2024-12-12 20:31:12.368590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:20:28.162 [2024-12-12 20:31:12.368596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.162 [2024-12-12 20:31:12.382687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.162 [2024-12-12 20:31:12.382717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:28.162 [2024-12-12 20:31:12.382727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.056 ms 00:20:28.162 [2024-12-12 20:31:12.382735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.162 [2024-12-12 20:31:12.383095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.162 [2024-12-12 20:31:12.383108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:28.162 [2024-12-12 20:31:12.383117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:20:28.162 [2024-12-12 20:31:12.383124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-12-12 20:31:12.417998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.421 [2024-12-12 20:31:12.418029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.421 [2024-12-12 20:31:12.418038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.421 [2024-12-12 20:31:12.418049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-12-12 20:31:12.418116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.421 [2024-12-12 20:31:12.418124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.421 [2024-12-12 20:31:12.418132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.421 [2024-12-12 20:31:12.418139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-12-12 20:31:12.418175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.421 [2024-12-12 20:31:12.418184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.421 [2024-12-12 20:31:12.418192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.421 [2024-12-12 20:31:12.418199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-12-12 20:31:12.418218] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.421 [2024-12-12 20:31:12.418225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.421 [2024-12-12 20:31:12.418232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.421 [2024-12-12 20:31:12.418239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-12-12 20:31:12.495773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.421 [2024-12-12 20:31:12.495815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:28.421 [2024-12-12 20:31:12.495825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.421 [2024-12-12 20:31:12.495833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-12-12 20:31:12.559384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.421 [2024-12-12 20:31:12.559428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:28.421 [2024-12-12 20:31:12.559439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.422 [2024-12-12 20:31:12.559447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.422 [2024-12-12 20:31:12.559498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.422 [2024-12-12 20:31:12.559507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:28.422 [2024-12-12 20:31:12.559515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.422 [2024-12-12 20:31:12.559522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.422 [2024-12-12 20:31:12.559550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.422 [2024-12-12 20:31:12.559563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:28.422 [2024-12-12 20:31:12.559570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.422 [2024-12-12 20:31:12.559577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.422 [2024-12-12 20:31:12.559658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.422 [2024-12-12 20:31:12.559667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:28.422 [2024-12-12 20:31:12.559675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.422 [2024-12-12 20:31:12.559682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.422 [2024-12-12 20:31:12.559710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.422 [2024-12-12 20:31:12.559719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:28.422 [2024-12-12 20:31:12.559729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.422 [2024-12-12 20:31:12.559737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.422 [2024-12-12 20:31:12.559774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.422 [2024-12-12 20:31:12.559782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:28.422 [2024-12-12 20:31:12.559789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.422 [2024-12-12 20:31:12.559797] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:28.422 [2024-12-12 20:31:12.559837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.422 [2024-12-12 20:31:12.559850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:28.422 [2024-12-12 20:31:12.559857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.422 [2024-12-12 20:31:12.559864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.422 [2024-12-12 20:31:12.559992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.106 ms, result 0 00:20:29.357 00:20:29.357 00:20:29.357 20:31:13 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78517 00:20:29.357 20:31:13 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78517 00:20:29.357 20:31:13 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:29.357 20:31:13 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78517 ']' 00:20:29.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.357 20:31:13 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.357 20:31:13 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.357 20:31:13 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.357 20:31:13 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.357 20:31:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:29.357 [2024-12-12 20:31:13.363141] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
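At trim.sh@92-94 above, the test starts a fresh spdk_tgt (pid 78517) with the ftl_init debug log flag and blocks in waitforlisten until the RPC socket answers; only then does the load_config call below replay the saved configuration. A hedged sketch of that launch-and-wait pattern; the real test uses the waitforlisten helper from autotest_common.sh, and the polling loop with the spdk_get_version RPC here is just one way to approximate it:

    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk"/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # Poll until the target answers on /var/tmp/spdk.sock (1 s RPC timeout per try).
    until "$spdk"/scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "spdk_tgt (pid $svcpid) is up"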
00:20:29.357 [2024-12-12 20:31:13.363278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78517 ] 00:20:29.357 [2024-12-12 20:31:13.531885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.615 [2024-12-12 20:31:13.630289] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.217 20:31:14 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.217 20:31:14 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:30.217 20:31:14 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:30.218 [2024-12-12 20:31:14.417300] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:30.218 [2024-12-12 20:31:14.417500] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:30.477 [2024-12-12 20:31:14.587492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.477 [2024-12-12 20:31:14.587537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:30.477 [2024-12-12 20:31:14.587551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:30.477 [2024-12-12 20:31:14.587559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.477 [2024-12-12 20:31:14.590172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.477 [2024-12-12 20:31:14.590205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:30.477 [2024-12-12 20:31:14.590216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:20:30.477 [2024-12-12 20:31:14.590224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.477 [2024-12-12 20:31:14.590296] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:30.477 [2024-12-12 20:31:14.591024] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:30.477 [2024-12-12 20:31:14.591050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.477 [2024-12-12 20:31:14.591058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:30.477 [2024-12-12 20:31:14.591068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:20:30.477 [2024-12-12 20:31:14.591075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.477 [2024-12-12 20:31:14.592253] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:30.477 [2024-12-12 20:31:14.604351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.477 [2024-12-12 20:31:14.604388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:30.477 [2024-12-12 20:31:14.604399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.100 ms 00:20:30.477 [2024-12-12 20:31:14.604409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.477 [2024-12-12 20:31:14.604507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.477 [2024-12-12 20:31:14.604520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:30.477 [2024-12-12 20:31:14.604528] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:30.477 [2024-12-12 20:31:14.604537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.477 [2024-12-12 20:31:14.609683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.477 [2024-12-12 20:31:14.609717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:30.477 [2024-12-12 20:31:14.609726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.100 ms 00:20:30.477 [2024-12-12 20:31:14.609735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.477 [2024-12-12 20:31:14.609824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.477 [2024-12-12 20:31:14.609835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:30.477 [2024-12-12 20:31:14.609843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:30.477 [2024-12-12 20:31:14.609855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.477 [2024-12-12 20:31:14.609877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.477 [2024-12-12 20:31:14.609887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:30.477 [2024-12-12 20:31:14.609894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:30.477 [2024-12-12 20:31:14.609903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.477 [2024-12-12 20:31:14.609924] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:30.477 [2024-12-12 20:31:14.613195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.477 [2024-12-12 20:31:14.613218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:30.477 [2024-12-12 20:31:14.613229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.273 ms 00:20:30.477 [2024-12-12 20:31:14.613237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.477 [2024-12-12 20:31:14.613274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.477 [2024-12-12 20:31:14.613282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:30.477 [2024-12-12 20:31:14.613291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:30.477 [2024-12-12 20:31:14.613301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.477 [2024-12-12 20:31:14.613321] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:30.477 [2024-12-12 20:31:14.613338] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:30.477 [2024-12-12 20:31:14.613378] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:30.477 [2024-12-12 20:31:14.613393] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:30.477 [2024-12-12 20:31:14.613512] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:30.477 [2024-12-12 20:31:14.613523] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:30.477 [2024-12-12 20:31:14.613538] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:30.477 [2024-12-12 20:31:14.613547] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:30.477 [2024-12-12 20:31:14.613557] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:30.477 [2024-12-12 20:31:14.613565] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:30.477 [2024-12-12 20:31:14.613573] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:30.477 [2024-12-12 20:31:14.613581] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:30.477 [2024-12-12 20:31:14.613599] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:30.477 [2024-12-12 20:31:14.613606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.477 [2024-12-12 20:31:14.613615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:30.477 [2024-12-12 20:31:14.613623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:20:30.477 [2024-12-12 20:31:14.613631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.477 [2024-12-12 20:31:14.613718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.477 [2024-12-12 20:31:14.613728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:30.477 [2024-12-12 20:31:14.613735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:30.477 [2024-12-12 20:31:14.613743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.477 [2024-12-12 20:31:14.613852] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:30.477 [2024-12-12 20:31:14.613864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:30.477 [2024-12-12 20:31:14.613872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:30.477 [2024-12-12 20:31:14.613881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.477 [2024-12-12 20:31:14.613888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:30.477 [2024-12-12 20:31:14.613897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:30.477 [2024-12-12 20:31:14.613904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:30.477 [2024-12-12 20:31:14.613914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:30.477 [2024-12-12 20:31:14.613921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:30.477 [2024-12-12 20:31:14.613929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:30.477 [2024-12-12 20:31:14.613936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:30.477 [2024-12-12 20:31:14.613944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:30.477 [2024-12-12 20:31:14.613950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:30.477 [2024-12-12 20:31:14.613958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:30.477 [2024-12-12 20:31:14.613964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:30.477 [2024-12-12 20:31:14.613972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.477 
[2024-12-12 20:31:14.613978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:30.477 [2024-12-12 20:31:14.613987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:30.477 [2024-12-12 20:31:14.613998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.477 [2024-12-12 20:31:14.614007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:30.477 [2024-12-12 20:31:14.614013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:30.477 [2024-12-12 20:31:14.614021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.477 [2024-12-12 20:31:14.614028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:30.477 [2024-12-12 20:31:14.614038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:30.477 [2024-12-12 20:31:14.614045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.477 [2024-12-12 20:31:14.614053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:30.477 [2024-12-12 20:31:14.614060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:30.477 [2024-12-12 20:31:14.614067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.477 [2024-12-12 20:31:14.614074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:30.477 [2024-12-12 20:31:14.614083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:30.477 [2024-12-12 20:31:14.614089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.477 [2024-12-12 20:31:14.614097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:30.477 [2024-12-12 20:31:14.614103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:30.477 [2024-12-12 20:31:14.614111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:30.477 [2024-12-12 20:31:14.614117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:30.477 [2024-12-12 20:31:14.614125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:30.478 [2024-12-12 20:31:14.614131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:30.478 [2024-12-12 20:31:14.614139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:30.478 [2024-12-12 20:31:14.614146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:30.478 [2024-12-12 20:31:14.614155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.478 [2024-12-12 20:31:14.614161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:30.478 [2024-12-12 20:31:14.614169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:30.478 [2024-12-12 20:31:14.614175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.478 [2024-12-12 20:31:14.614183] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:30.478 [2024-12-12 20:31:14.614192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:30.478 [2024-12-12 20:31:14.614200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:30.478 [2024-12-12 20:31:14.614207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.478 [2024-12-12 20:31:14.614216] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:30.478 [2024-12-12 20:31:14.614222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:30.478 [2024-12-12 20:31:14.614230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:30.478 [2024-12-12 20:31:14.614236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:30.478 [2024-12-12 20:31:14.614244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:30.478 [2024-12-12 20:31:14.614251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:30.478 [2024-12-12 20:31:14.614260] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:30.478 [2024-12-12 20:31:14.614268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:30.478 [2024-12-12 20:31:14.614282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:30.478 [2024-12-12 20:31:14.614290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:30.478 [2024-12-12 20:31:14.614298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:30.478 [2024-12-12 20:31:14.614305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:30.478 [2024-12-12 20:31:14.614313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:30.478 [2024-12-12 20:31:14.614320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:30.478 [2024-12-12 20:31:14.614329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:30.478 [2024-12-12 20:31:14.614336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:30.478 [2024-12-12 20:31:14.614344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:30.478 [2024-12-12 20:31:14.614351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:30.478 [2024-12-12 20:31:14.614360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:30.478 [2024-12-12 20:31:14.614367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:30.478 [2024-12-12 20:31:14.614376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:30.478 [2024-12-12 20:31:14.614382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:30.478 [2024-12-12 20:31:14.614390] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:30.478 [2024-12-12 
20:31:14.614398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:30.478 [2024-12-12 20:31:14.614409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:30.478 [2024-12-12 20:31:14.614427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:30.478 [2024-12-12 20:31:14.614436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:30.478 [2024-12-12 20:31:14.614443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:30.478 [2024-12-12 20:31:14.614452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.478 [2024-12-12 20:31:14.614459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:30.478 [2024-12-12 20:31:14.614468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:20:30.478 [2024-12-12 20:31:14.614477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.478 [2024-12-12 20:31:14.640582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.478 [2024-12-12 20:31:14.640614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:30.478 [2024-12-12 20:31:14.640626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.033 ms 00:20:30.478 [2024-12-12 20:31:14.640636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.478 [2024-12-12 20:31:14.640747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.478 [2024-12-12 20:31:14.640757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:30.478 [2024-12-12 20:31:14.640766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:30.478 [2024-12-12 20:31:14.640773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.478 [2024-12-12 20:31:14.671167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.478 [2024-12-12 20:31:14.671199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:30.478 [2024-12-12 20:31:14.671211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.369 ms 00:20:30.478 [2024-12-12 20:31:14.671218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.478 [2024-12-12 20:31:14.671272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.478 [2024-12-12 20:31:14.671282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:30.478 [2024-12-12 20:31:14.671292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:30.478 [2024-12-12 20:31:14.671299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.478 [2024-12-12 20:31:14.671647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.478 [2024-12-12 20:31:14.671660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:30.478 [2024-12-12 20:31:14.671672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:20:30.478 [2024-12-12 20:31:14.671679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:30.478 [2024-12-12 20:31:14.671801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.478 [2024-12-12 20:31:14.671814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:30.478 [2024-12-12 20:31:14.671823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:20:30.478 [2024-12-12 20:31:14.671831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.478 [2024-12-12 20:31:14.686217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.478 [2024-12-12 20:31:14.686246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:30.478 [2024-12-12 20:31:14.686257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.364 ms 00:20:30.478 [2024-12-12 20:31:14.686264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.737 [2024-12-12 20:31:14.717516] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:30.737 [2024-12-12 20:31:14.717554] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:30.737 [2024-12-12 20:31:14.717570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.737 [2024-12-12 20:31:14.717578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:30.737 [2024-12-12 20:31:14.717590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.195 ms 00:20:30.737 [2024-12-12 20:31:14.717603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.737 [2024-12-12 20:31:14.741636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.737 [2024-12-12 20:31:14.741670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:30.737 [2024-12-12 20:31:14.741683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.962 ms 00:20:30.737 [2024-12-12 20:31:14.741691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.737 [2024-12-12 20:31:14.753053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.737 [2024-12-12 20:31:14.753080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:30.737 [2024-12-12 20:31:14.753093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.291 ms 00:20:30.737 [2024-12-12 20:31:14.753100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.737 [2024-12-12 20:31:14.764069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.737 [2024-12-12 20:31:14.764192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:30.737 [2024-12-12 20:31:14.764212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.908 ms 00:20:30.737 [2024-12-12 20:31:14.764219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.737 [2024-12-12 20:31:14.764841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.737 [2024-12-12 20:31:14.764859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:30.737 [2024-12-12 20:31:14.764870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:20:30.737 [2024-12-12 20:31:14.764877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.737 [2024-12-12 
20:31:14.819260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.737 [2024-12-12 20:31:14.819427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:30.737 [2024-12-12 20:31:14.819451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.353 ms 00:20:30.737 [2024-12-12 20:31:14.819460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.737 [2024-12-12 20:31:14.829593] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:30.737 [2024-12-12 20:31:14.844018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.737 [2024-12-12 20:31:14.844055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:30.737 [2024-12-12 20:31:14.844070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.464 ms 00:20:30.737 [2024-12-12 20:31:14.844081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.737 [2024-12-12 20:31:14.844155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.737 [2024-12-12 20:31:14.844166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:30.737 [2024-12-12 20:31:14.844175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:30.737 [2024-12-12 20:31:14.844184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.737 [2024-12-12 20:31:14.844231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.737 [2024-12-12 20:31:14.844246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:30.737 [2024-12-12 20:31:14.844254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:30.737 [2024-12-12 20:31:14.844264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.737 [2024-12-12 20:31:14.844286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.737 [2024-12-12 20:31:14.844296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:30.737 [2024-12-12 20:31:14.844304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:30.737 [2024-12-12 20:31:14.844315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.737 [2024-12-12 20:31:14.844345] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:30.737 [2024-12-12 20:31:14.844358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.737 [2024-12-12 20:31:14.844368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:30.737 [2024-12-12 20:31:14.844377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:30.737 [2024-12-12 20:31:14.844384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.737 [2024-12-12 20:31:14.867787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.737 [2024-12-12 20:31:14.867819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:30.737 [2024-12-12 20:31:14.867833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.376 ms 00:20:30.737 [2024-12-12 20:31:14.867841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.737 [2024-12-12 20:31:14.867922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.737 [2024-12-12 20:31:14.867933] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:30.737 [2024-12-12 20:31:14.867943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:30.737 [2024-12-12 20:31:14.867952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.737 [2024-12-12 20:31:14.869078] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:30.737 [2024-12-12 20:31:14.872028] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 281.293 ms, result 0 00:20:30.737 [2024-12-12 20:31:14.873133] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:30.737 Some configs were skipped because the RPC state that can call them passed over. 00:20:30.737 20:31:14 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:30.998 [2024-12-12 20:31:15.111796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.998 [2024-12-12 20:31:15.111956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:30.998 [2024-12-12 20:31:15.112353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.640 ms 00:20:30.998 [2024-12-12 20:31:15.112400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.998 [2024-12-12 20:31:15.112513] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.351 ms, result 0 00:20:30.998 true 00:20:30.998 20:31:15 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:31.257 [2024-12-12 20:31:15.311937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.257 [2024-12-12 20:31:15.312100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:31.257 [2024-12-12 20:31:15.312158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.498 ms 00:20:31.257 [2024-12-12 20:31:15.312181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.257 [2024-12-12 20:31:15.312233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.805 ms, result 0 00:20:31.257 true 00:20:31.257 20:31:15 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78517 00:20:31.257 20:31:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78517 ']' 00:20:31.257 20:31:15 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78517 00:20:31.257 20:31:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:31.257 20:31:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.257 20:31:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78517 00:20:31.257 20:31:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:31.257 20:31:15 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:31.257 20:31:15 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78517' 00:20:31.257 killing process with pid 78517 00:20:31.257 20:31:15 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78517 00:20:31.257 20:31:15 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78517 00:20:31.823 [2024-12-12 20:31:16.042732] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.823 [2024-12-12 20:31:16.043274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:31.823 [2024-12-12 20:31:16.043347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:31.823 [2024-12-12 20:31:16.043375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.823 [2024-12-12 20:31:16.043434] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:31.823 [2024-12-12 20:31:16.046099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.823 [2024-12-12 20:31:16.046197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:31.823 [2024-12-12 20:31:16.046259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.573 ms 00:20:31.823 [2024-12-12 20:31:16.046281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.823 [2024-12-12 20:31:16.046601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.823 [2024-12-12 20:31:16.046679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:31.823 [2024-12-12 20:31:16.046735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:20:31.823 [2024-12-12 20:31:16.046757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.823 [2024-12-12 20:31:16.050765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.823 [2024-12-12 20:31:16.050864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:31.823 [2024-12-12 20:31:16.050939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.974 ms 00:20:31.823 [2024-12-12 20:31:16.050962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.083 [2024-12-12 20:31:16.057838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.083 [2024-12-12 20:31:16.057938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:32.083 [2024-12-12 20:31:16.057998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.828 ms 00:20:32.083 [2024-12-12 20:31:16.058020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.083 [2024-12-12 20:31:16.067702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.083 [2024-12-12 20:31:16.067805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:32.083 [2024-12-12 20:31:16.067856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.617 ms 00:20:32.083 [2024-12-12 20:31:16.067877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.083 [2024-12-12 20:31:16.074912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.083 [2024-12-12 20:31:16.075021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:32.083 [2024-12-12 20:31:16.075078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.988 ms 00:20:32.083 [2024-12-12 20:31:16.075100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.083 [2024-12-12 20:31:16.075260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.083 [2024-12-12 20:31:16.075286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:32.083 [2024-12-12 20:31:16.075308] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:32.083 [2024-12-12 20:31:16.075358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.083 [2024-12-12 20:31:16.085060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.083 [2024-12-12 20:31:16.085158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:32.083 [2024-12-12 20:31:16.085213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.666 ms 00:20:32.083 [2024-12-12 20:31:16.085223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.083 [2024-12-12 20:31:16.094551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.083 [2024-12-12 20:31:16.094580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:32.083 [2024-12-12 20:31:16.094592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.283 ms 00:20:32.083 [2024-12-12 20:31:16.094599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.083 [2024-12-12 20:31:16.103382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.083 [2024-12-12 20:31:16.103410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:32.083 [2024-12-12 20:31:16.103435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.744 ms 00:20:32.083 [2024-12-12 20:31:16.103442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.083 [2024-12-12 20:31:16.112204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.083 [2024-12-12 20:31:16.112233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:32.083 [2024-12-12 20:31:16.112244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.694 ms 00:20:32.083 [2024-12-12 20:31:16.112251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.083 [2024-12-12 20:31:16.112284] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:32.083 [2024-12-12 20:31:16.112298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112386] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 
[2024-12-12 20:31:16.112615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:32.083 [2024-12-12 20:31:16.112762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:32.084 [2024-12-12 20:31:16.112817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.112996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:32.084 [2024-12-12 20:31:16.113155] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:32.084 [2024-12-12 20:31:16.113168] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ce9d3d9-3dbe-48b1-9531-60bd0b669e2c 00:20:32.084 [2024-12-12 20:31:16.113178] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:32.084 [2024-12-12 20:31:16.113187] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:32.084 [2024-12-12 20:31:16.113193] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:32.084 [2024-12-12 20:31:16.113203] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:32.084 [2024-12-12 20:31:16.113210] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:32.084 [2024-12-12 20:31:16.113219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:32.084 [2024-12-12 20:31:16.113226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:32.084 [2024-12-12 20:31:16.113233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:32.084 [2024-12-12 20:31:16.113239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:32.084 [2024-12-12 20:31:16.113247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:32.084 [2024-12-12 20:31:16.113255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:32.084 [2024-12-12 20:31:16.113264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:20:32.084 [2024-12-12 20:31:16.113271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.084 [2024-12-12 20:31:16.125524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.084 [2024-12-12 20:31:16.125553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:32.084 [2024-12-12 20:31:16.125567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.229 ms 00:20:32.084 [2024-12-12 20:31:16.125575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.084 [2024-12-12 20:31:16.125934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.084 [2024-12-12 20:31:16.125953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:32.084 [2024-12-12 20:31:16.125965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:20:32.084 [2024-12-12 20:31:16.125973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.084 [2024-12-12 20:31:16.164481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.084 [2024-12-12 20:31:16.164510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:32.084 [2024-12-12 20:31:16.164520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.084 [2024-12-12 20:31:16.164527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.084 [2024-12-12 20:31:16.164611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.084 [2024-12-12 20:31:16.164619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:32.084 [2024-12-12 20:31:16.164628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.084 [2024-12-12 20:31:16.164634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.084 [2024-12-12 20:31:16.164668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.084 [2024-12-12 20:31:16.164676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:32.084 [2024-12-12 20:31:16.164684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.084 [2024-12-12 20:31:16.164689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.084 [2024-12-12 20:31:16.164705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.084 [2024-12-12 20:31:16.164712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:32.084 [2024-12-12 20:31:16.164719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.084 [2024-12-12 20:31:16.164726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.084 [2024-12-12 20:31:16.223831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.084 [2024-12-12 20:31:16.223879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:32.084 [2024-12-12 20:31:16.223888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.084 [2024-12-12 20:31:16.223894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.084 [2024-12-12 
20:31:16.273792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.084 [2024-12-12 20:31:16.273828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:32.084 [2024-12-12 20:31:16.273838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.084 [2024-12-12 20:31:16.273846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.084 [2024-12-12 20:31:16.273911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.084 [2024-12-12 20:31:16.273919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:32.084 [2024-12-12 20:31:16.273928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.084 [2024-12-12 20:31:16.273934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.084 [2024-12-12 20:31:16.273957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.084 [2024-12-12 20:31:16.273964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:32.084 [2024-12-12 20:31:16.273971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.084 [2024-12-12 20:31:16.273977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.084 [2024-12-12 20:31:16.274050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.084 [2024-12-12 20:31:16.274058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:32.085 [2024-12-12 20:31:16.274065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.085 [2024-12-12 20:31:16.274072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.085 [2024-12-12 20:31:16.274098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.085 [2024-12-12 20:31:16.274105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:32.085 [2024-12-12 20:31:16.274113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.085 [2024-12-12 20:31:16.274118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.085 [2024-12-12 20:31:16.274150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.085 [2024-12-12 20:31:16.274157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:32.085 [2024-12-12 20:31:16.274166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.085 [2024-12-12 20:31:16.274171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.085 [2024-12-12 20:31:16.274207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.085 [2024-12-12 20:31:16.274214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:32.085 [2024-12-12 20:31:16.274221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.085 [2024-12-12 20:31:16.274227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.085 [2024-12-12 20:31:16.274331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 231.566 ms, result 0 00:20:32.651 20:31:16 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:32.651 [2024-12-12 20:31:16.865941] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:20:32.651 [2024-12-12 20:31:16.866057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78570 ] 00:20:32.910 [2024-12-12 20:31:17.021565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.910 [2024-12-12 20:31:17.103355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.168 [2024-12-12 20:31:17.319136] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:33.168 [2024-12-12 20:31:17.319189] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:33.428 [2024-12-12 20:31:17.471423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.428 [2024-12-12 20:31:17.471458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:33.428 [2024-12-12 20:31:17.471469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:33.428 [2024-12-12 20:31:17.471475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.428 [2024-12-12 20:31:17.473597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.428 [2024-12-12 20:31:17.473624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:33.428 [2024-12-12 20:31:17.473632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.109 ms 00:20:33.428 [2024-12-12 20:31:17.473638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.428 [2024-12-12 20:31:17.473695] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:33.428 [2024-12-12 20:31:17.474200] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:33.428 [2024-12-12 20:31:17.474219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.428 [2024-12-12 20:31:17.474226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.428 [2024-12-12 20:31:17.474233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:20:33.428 [2024-12-12 20:31:17.474238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.428 [2024-12-12 20:31:17.475295] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:33.428 [2024-12-12 20:31:17.485365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.428 [2024-12-12 20:31:17.485500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:33.428 [2024-12-12 20:31:17.485515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.071 ms 00:20:33.428 [2024-12-12 20:31:17.485522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.428 [2024-12-12 20:31:17.485592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.428 [2024-12-12 20:31:17.485601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:33.428 [2024-12-12 20:31:17.485608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:33.428 [2024-12-12 
20:31:17.485614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.428 [2024-12-12 20:31:17.490584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.428 [2024-12-12 20:31:17.490669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.428 [2024-12-12 20:31:17.490716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.938 ms 00:20:33.428 [2024-12-12 20:31:17.490733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.428 [2024-12-12 20:31:17.490816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.428 [2024-12-12 20:31:17.490938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.428 [2024-12-12 20:31:17.490977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:33.428 [2024-12-12 20:31:17.490991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.428 [2024-12-12 20:31:17.491023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.428 [2024-12-12 20:31:17.491039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:33.428 [2024-12-12 20:31:17.491058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:33.428 [2024-12-12 20:31:17.491077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.428 [2024-12-12 20:31:17.491113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:33.428 [2024-12-12 20:31:17.493800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.428 [2024-12-12 20:31:17.493823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.428 [2024-12-12 20:31:17.493830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.695 ms 00:20:33.428 [2024-12-12 20:31:17.493836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.428 [2024-12-12 20:31:17.493868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.428 [2024-12-12 20:31:17.493875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:33.428 [2024-12-12 20:31:17.493882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:33.428 [2024-12-12 20:31:17.493887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.428 [2024-12-12 20:31:17.493902] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:33.428 [2024-12-12 20:31:17.493918] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:33.428 [2024-12-12 20:31:17.493944] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:33.428 [2024-12-12 20:31:17.493956] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:33.428 [2024-12-12 20:31:17.494035] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:33.428 [2024-12-12 20:31:17.494043] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:33.428 [2024-12-12 20:31:17.494051] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:20:33.428 [2024-12-12 20:31:17.494061] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:33.428 [2024-12-12 20:31:17.494068] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:33.428 [2024-12-12 20:31:17.494074] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:33.428 [2024-12-12 20:31:17.494079] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:33.428 [2024-12-12 20:31:17.494085] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:33.428 [2024-12-12 20:31:17.494090] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:33.428 [2024-12-12 20:31:17.494096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.428 [2024-12-12 20:31:17.494101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:33.428 [2024-12-12 20:31:17.494107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:20:33.428 [2024-12-12 20:31:17.494113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.428 [2024-12-12 20:31:17.494179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.428 [2024-12-12 20:31:17.494187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:33.428 [2024-12-12 20:31:17.494193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:33.428 [2024-12-12 20:31:17.494199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.428 [2024-12-12 20:31:17.494272] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:33.428 [2024-12-12 20:31:17.494279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:33.428 [2024-12-12 20:31:17.494285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:33.428 [2024-12-12 20:31:17.494291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.428 [2024-12-12 20:31:17.494297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:33.428 [2024-12-12 20:31:17.494302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:33.428 [2024-12-12 20:31:17.494307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:33.428 [2024-12-12 20:31:17.494313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:33.428 [2024-12-12 20:31:17.494319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:33.428 [2024-12-12 20:31:17.494323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:33.428 [2024-12-12 20:31:17.494329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:33.428 [2024-12-12 20:31:17.494338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:33.428 [2024-12-12 20:31:17.494343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:33.428 [2024-12-12 20:31:17.494349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:33.428 [2024-12-12 20:31:17.494354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:33.428 [2024-12-12 20:31:17.494358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.428 [2024-12-12 20:31:17.494364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:33.428 [2024-12-12 20:31:17.494370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:33.428 [2024-12-12 20:31:17.494375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.428 [2024-12-12 20:31:17.494380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:33.428 [2024-12-12 20:31:17.494386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:33.428 [2024-12-12 20:31:17.494391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.428 [2024-12-12 20:31:17.494396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:33.428 [2024-12-12 20:31:17.494401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:33.428 [2024-12-12 20:31:17.494406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.428 [2024-12-12 20:31:17.494411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:33.428 [2024-12-12 20:31:17.494439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:33.429 [2024-12-12 20:31:17.494444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.429 [2024-12-12 20:31:17.494450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:33.429 [2024-12-12 20:31:17.494455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:33.429 [2024-12-12 20:31:17.494460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.429 [2024-12-12 20:31:17.494465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:33.429 [2024-12-12 20:31:17.494470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:33.429 [2024-12-12 20:31:17.494475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:33.429 [2024-12-12 20:31:17.494480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:33.429 [2024-12-12 20:31:17.494485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:33.429 [2024-12-12 20:31:17.494490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:33.429 [2024-12-12 20:31:17.494495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:33.429 [2024-12-12 20:31:17.494500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:33.429 [2024-12-12 20:31:17.494505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.429 [2024-12-12 20:31:17.494510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:33.429 [2024-12-12 20:31:17.494515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:33.429 [2024-12-12 20:31:17.494521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.429 [2024-12-12 20:31:17.494526] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:33.429 [2024-12-12 20:31:17.494532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:33.429 [2024-12-12 20:31:17.494540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:33.429 [2024-12-12 20:31:17.494545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.429 [2024-12-12 20:31:17.494551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:33.429 [2024-12-12 20:31:17.494557] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:33.429 [2024-12-12 20:31:17.494563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:33.429 [2024-12-12 20:31:17.494568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:33.429 [2024-12-12 20:31:17.494573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:33.429 [2024-12-12 20:31:17.494578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:33.429 [2024-12-12 20:31:17.494584] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:33.429 [2024-12-12 20:31:17.494591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:33.429 [2024-12-12 20:31:17.494598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:33.429 [2024-12-12 20:31:17.494604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:33.429 [2024-12-12 20:31:17.494609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:33.429 [2024-12-12 20:31:17.494615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:33.429 [2024-12-12 20:31:17.494620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:33.429 [2024-12-12 20:31:17.494626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:33.429 [2024-12-12 20:31:17.494631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:33.429 [2024-12-12 20:31:17.494636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:33.429 [2024-12-12 20:31:17.494651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:33.429 [2024-12-12 20:31:17.494656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:33.429 [2024-12-12 20:31:17.494661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:33.429 [2024-12-12 20:31:17.494667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:33.429 [2024-12-12 20:31:17.494672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:33.429 [2024-12-12 20:31:17.494678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:33.429 [2024-12-12 20:31:17.494683] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:33.429 [2024-12-12 20:31:17.494690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:33.429 [2024-12-12 20:31:17.494696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:33.429 [2024-12-12 20:31:17.494701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:33.429 [2024-12-12 20:31:17.494707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:33.429 [2024-12-12 20:31:17.494712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:33.429 [2024-12-12 20:31:17.494718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.429 [2024-12-12 20:31:17.494726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:33.429 [2024-12-12 20:31:17.494731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:20:33.429 [2024-12-12 20:31:17.494736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.429 [2024-12-12 20:31:17.516341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.429 [2024-12-12 20:31:17.516370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:33.429 [2024-12-12 20:31:17.516378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.552 ms 00:20:33.429 [2024-12-12 20:31:17.516384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.429 [2024-12-12 20:31:17.516493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.429 [2024-12-12 20:31:17.516501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:33.429 [2024-12-12 20:31:17.516508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:33.429 [2024-12-12 20:31:17.516514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.429 [2024-12-12 20:31:17.553410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.429 [2024-12-12 20:31:17.553447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:33.429 [2024-12-12 20:31:17.553458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.877 ms 00:20:33.429 [2024-12-12 20:31:17.553464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.429 [2024-12-12 20:31:17.553523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.429 [2024-12-12 20:31:17.553533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:33.429 [2024-12-12 20:31:17.553539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:33.429 [2024-12-12 20:31:17.553545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.429 [2024-12-12 20:31:17.553837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.429 [2024-12-12 20:31:17.553849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:33.429 [2024-12-12 20:31:17.553856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:20:33.429 [2024-12-12 20:31:17.553867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.429 [2024-12-12 20:31:17.553973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:33.429 [2024-12-12 20:31:17.553981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:33.429 [2024-12-12 20:31:17.553987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:33.429 [2024-12-12 20:31:17.553993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.429 [2024-12-12 20:31:17.565095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.429 [2024-12-12 20:31:17.565188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:33.429 [2024-12-12 20:31:17.565228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.085 ms 00:20:33.429 [2024-12-12 20:31:17.565247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.429 [2024-12-12 20:31:17.575148] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:33.429 [2024-12-12 20:31:17.575263] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:33.429 [2024-12-12 20:31:17.575344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.429 [2024-12-12 20:31:17.575360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:33.429 [2024-12-12 20:31:17.575375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.995 ms 00:20:33.429 [2024-12-12 20:31:17.575389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.429 [2024-12-12 20:31:17.594123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.429 [2024-12-12 20:31:17.594214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:33.429 [2024-12-12 20:31:17.594253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.663 ms 00:20:33.429 [2024-12-12 20:31:17.594269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.429 [2024-12-12 20:31:17.602889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.429 [2024-12-12 20:31:17.602974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:33.429 [2024-12-12 20:31:17.603014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.559 ms 00:20:33.429 [2024-12-12 20:31:17.603031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.429 [2024-12-12 20:31:17.611321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.429 [2024-12-12 20:31:17.611407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:33.429 [2024-12-12 20:31:17.611460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.244 ms 00:20:33.429 [2024-12-12 20:31:17.611477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.429 [2024-12-12 20:31:17.611945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.429 [2024-12-12 20:31:17.612016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:33.429 [2024-12-12 20:31:17.612055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:20:33.429 [2024-12-12 20:31:17.612072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.688 [2024-12-12 20:31:17.656229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.688 [2024-12-12 20:31:17.656375] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:33.688 [2024-12-12 20:31:17.656423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.125 ms 00:20:33.688 [2024-12-12 20:31:17.656441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.688 [2024-12-12 20:31:17.664531] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:33.688 [2024-12-12 20:31:17.676903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.688 [2024-12-12 20:31:17.677006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:33.689 [2024-12-12 20:31:17.677017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.390 ms 00:20:33.689 [2024-12-12 20:31:17.677028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.689 [2024-12-12 20:31:17.677099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.689 [2024-12-12 20:31:17.677107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:33.689 [2024-12-12 20:31:17.677114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:33.689 [2024-12-12 20:31:17.677120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.689 [2024-12-12 20:31:17.677158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.689 [2024-12-12 20:31:17.677165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:33.689 [2024-12-12 20:31:17.677171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:33.689 [2024-12-12 20:31:17.677179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.689 [2024-12-12 20:31:17.677204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.689 [2024-12-12 20:31:17.677210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:33.689 [2024-12-12 20:31:17.677216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:33.689 [2024-12-12 20:31:17.677222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.689 [2024-12-12 20:31:17.677247] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:33.689 [2024-12-12 20:31:17.677255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.689 [2024-12-12 20:31:17.677261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:33.689 [2024-12-12 20:31:17.677267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:33.689 [2024-12-12 20:31:17.677273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.689 [2024-12-12 20:31:17.695124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.689 [2024-12-12 20:31:17.695151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:33.689 [2024-12-12 20:31:17.695160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.830 ms 00:20:33.689 [2024-12-12 20:31:17.695166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.689 [2024-12-12 20:31:17.695232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.689 [2024-12-12 20:31:17.695241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:33.689 [2024-12-12 20:31:17.695248] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:33.689 [2024-12-12 20:31:17.695253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.689 [2024-12-12 20:31:17.696075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:33.689 [2024-12-12 20:31:17.698534] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 224.426 ms, result 0 00:20:33.689 [2024-12-12 20:31:17.699191] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:33.689 [2024-12-12 20:31:17.714181] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:34.623  [2024-12-12T20:31:19.785Z] Copying: 46/256 [MB] (46 MBps) [2024-12-12T20:31:21.159Z] Copying: 88/256 [MB] (42 MBps) [2024-12-12T20:31:22.093Z] Copying: 129/256 [MB] (40 MBps) [2024-12-12T20:31:23.027Z] Copying: 171/256 [MB] (42 MBps) [2024-12-12T20:31:23.962Z] Copying: 204/256 [MB] (32 MBps) [2024-12-12T20:31:24.542Z] Copying: 234/256 [MB] (30 MBps) [2024-12-12T20:31:24.800Z] Copying: 256/256 [MB] (average 39 MBps)[2024-12-12 20:31:24.672277] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:40.572 [2024-12-12 20:31:24.681503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.572 [2024-12-12 20:31:24.681539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:40.572 [2024-12-12 20:31:24.681556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:40.572 [2024-12-12 20:31:24.681564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.572 [2024-12-12 20:31:24.681586] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:40.572 [2024-12-12 20:31:24.684207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.572 [2024-12-12 20:31:24.684347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:40.572 [2024-12-12 20:31:24.684365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.608 ms 00:20:40.572 [2024-12-12 20:31:24.684375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.572 [2024-12-12 20:31:24.684686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.572 [2024-12-12 20:31:24.684698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:40.572 [2024-12-12 20:31:24.684708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:20:40.572 [2024-12-12 20:31:24.684717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.572 [2024-12-12 20:31:24.688395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.572 [2024-12-12 20:31:24.688421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:40.572 [2024-12-12 20:31:24.688431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.658 ms 00:20:40.572 [2024-12-12 20:31:24.688439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.572 [2024-12-12 20:31:24.695260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.572 [2024-12-12 20:31:24.695369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
00:20:40.572 [2024-12-12 20:31:24.695384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.803 ms 00:20:40.572 [2024-12-12 20:31:24.695392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.572 [2024-12-12 20:31:24.719226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.572 [2024-12-12 20:31:24.719258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:40.572 [2024-12-12 20:31:24.719269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.761 ms 00:20:40.573 [2024-12-12 20:31:24.719277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.573 [2024-12-12 20:31:24.733508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.573 [2024-12-12 20:31:24.733635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:40.573 [2024-12-12 20:31:24.733657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.209 ms 00:20:40.573 [2024-12-12 20:31:24.733665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.573 [2024-12-12 20:31:24.733799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.573 [2024-12-12 20:31:24.733810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:40.573 [2024-12-12 20:31:24.733824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:40.573 [2024-12-12 20:31:24.733831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.573 [2024-12-12 20:31:24.759172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.573 [2024-12-12 20:31:24.759220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:40.573 [2024-12-12 20:31:24.759236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.323 ms 00:20:40.573 [2024-12-12 20:31:24.759248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.573 [2024-12-12 20:31:24.786312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.573 [2024-12-12 20:31:24.786346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:40.573 [2024-12-12 20:31:24.786356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.009 ms 00:20:40.573 [2024-12-12 20:31:24.786364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.833 [2024-12-12 20:31:24.810915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.833 [2024-12-12 20:31:24.810949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:40.833 [2024-12-12 20:31:24.810959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.516 ms 00:20:40.833 [2024-12-12 20:31:24.810966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.833 [2024-12-12 20:31:24.835090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.833 [2024-12-12 20:31:24.835122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:40.833 [2024-12-12 20:31:24.835133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.062 ms 00:20:40.833 [2024-12-12 20:31:24.835140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.833 [2024-12-12 20:31:24.835161] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:40.833 [2024-12-12 
20:31:24.835175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 
[2024-12-12 20:31:24.835362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:20:40.833 [2024-12-12 20:31:24.835599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-12-12 20:31:24.835761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:40.834 [2024-12-12 20:31:24.835981] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:40.834 [2024-12-12 20:31:24.835989] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ce9d3d9-3dbe-48b1-9531-60bd0b669e2c 00:20:40.834 [2024-12-12 20:31:24.835997] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:40.834 [2024-12-12 20:31:24.836004] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:40.834 [2024-12-12 20:31:24.836011] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:40.834 [2024-12-12 20:31:24.836019] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:40.834 [2024-12-12 20:31:24.836025] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:40.834 [2024-12-12 20:31:24.836033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:40.834 [2024-12-12 20:31:24.836042] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:40.834 [2024-12-12 20:31:24.836049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:40.834 [2024-12-12 20:31:24.836055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:40.834 [2024-12-12 20:31:24.836061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.834 [2024-12-12 20:31:24.836068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:40.834 [2024-12-12 20:31:24.836076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:20:40.834 [2024-12-12 20:31:24.836083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:24.849852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.834 [2024-12-12 20:31:24.849881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:40.834 [2024-12-12 20:31:24.849891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.737 ms 00:20:40.834 [2024-12-12 20:31:24.849899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:24.850264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.834 [2024-12-12 20:31:24.850282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:40.834 [2024-12-12 20:31:24.850291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:20:40.834 [2024-12-12 20:31:24.850298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:24.885337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.834 [2024-12-12 20:31:24.885369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.834 [2024-12-12 20:31:24.885379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.834 [2024-12-12 20:31:24.885391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:24.885491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.834 [2024-12-12 20:31:24.885500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.834 [2024-12-12 20:31:24.885508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.834 [2024-12-12 20:31:24.885516] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:24.885554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.834 [2024-12-12 20:31:24.885563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.834 [2024-12-12 20:31:24.885571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.834 [2024-12-12 20:31:24.885578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:24.885599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.834 [2024-12-12 20:31:24.885607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.834 [2024-12-12 20:31:24.885615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.834 [2024-12-12 20:31:24.885622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:24.963111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.834 [2024-12-12 20:31:24.963147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.834 [2024-12-12 20:31:24.963158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.834 [2024-12-12 20:31:24.963165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:25.027138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.834 [2024-12-12 20:31:25.027176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.834 [2024-12-12 20:31:25.027187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.834 [2024-12-12 20:31:25.027194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:25.027259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.834 [2024-12-12 20:31:25.027268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.834 [2024-12-12 20:31:25.027276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.834 [2024-12-12 20:31:25.027283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:25.027311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.834 [2024-12-12 20:31:25.027322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.834 [2024-12-12 20:31:25.027329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.834 [2024-12-12 20:31:25.027336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:25.027520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.834 [2024-12-12 20:31:25.027531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.834 [2024-12-12 20:31:25.027539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.834 [2024-12-12 20:31:25.027546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:25.027576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.834 [2024-12-12 20:31:25.027585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:40.834 [2024-12-12 20:31:25.027595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:40.834 [2024-12-12 20:31:25.027603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:25.027635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.834 [2024-12-12 20:31:25.027644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.834 [2024-12-12 20:31:25.027651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.834 [2024-12-12 20:31:25.027658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:25.027695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.834 [2024-12-12 20:31:25.027707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.834 [2024-12-12 20:31:25.027715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.834 [2024-12-12 20:31:25.027722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.834 [2024-12-12 20:31:25.027845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.332 ms, result 0 00:20:41.769 00:20:41.769 00:20:41.769 20:31:25 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:42.336 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:42.336 20:31:26 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:42.336 20:31:26 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:42.336 20:31:26 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:42.336 20:31:26 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:42.336 20:31:26 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:42.336 20:31:26 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:42.336 Process with pid 78517 is not found 00:20:42.336 20:31:26 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78517 00:20:42.336 20:31:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78517 ']' 00:20:42.336 20:31:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78517 00:20:42.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78517) - No such process 00:20:42.336 20:31:26 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78517 is not found' 00:20:42.336 ************************************ 00:20:42.336 END TEST ftl_trim 00:20:42.336 ************************************ 00:20:42.336 00:20:42.336 real 0m52.129s 00:20:42.336 user 1m15.789s 00:20:42.336 sys 0m5.204s 00:20:42.336 20:31:26 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.336 20:31:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:42.336 20:31:26 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:42.336 20:31:26 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:42.336 20:31:26 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.336 20:31:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:42.336 ************************************ 00:20:42.336 START TEST ftl_restore 00:20:42.336 ************************************ 00:20:42.336 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:42.336 * Looking for test storage... 00:20:42.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:42.336 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:42.336 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:20:42.336 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:42.336 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.336 20:31:26 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:42.336 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.336 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:42.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.336 --rc genhtml_branch_coverage=1 00:20:42.336 --rc genhtml_function_coverage=1 00:20:42.336 --rc genhtml_legend=1 00:20:42.336 --rc geninfo_all_blocks=1 00:20:42.336 --rc geninfo_unexecuted_blocks=1 00:20:42.336 00:20:42.336 ' 00:20:42.336 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:42.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.336 --rc genhtml_branch_coverage=1 00:20:42.336 --rc genhtml_function_coverage=1 00:20:42.336 --rc genhtml_legend=1 00:20:42.336 --rc geninfo_all_blocks=1 00:20:42.336 --rc geninfo_unexecuted_blocks=1 00:20:42.336 00:20:42.336 ' 00:20:42.336 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:42.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.336 --rc genhtml_branch_coverage=1 00:20:42.336 --rc genhtml_function_coverage=1 00:20:42.336 --rc genhtml_legend=1 00:20:42.336 --rc geninfo_all_blocks=1 00:20:42.336 --rc geninfo_unexecuted_blocks=1 00:20:42.336 00:20:42.336 ' 00:20:42.336 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:42.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.336 --rc genhtml_branch_coverage=1 00:20:42.336 --rc genhtml_function_coverage=1 00:20:42.336 --rc genhtml_legend=1 00:20:42.336 --rc geninfo_all_blocks=1 00:20:42.336 --rc geninfo_unexecuted_blocks=1 00:20:42.336 00:20:42.336 ' 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
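The xtrace above is scripts/common.sh deciding whether the installed lcov predates 2.0 (`lt 1.15 2` via `cmp_versions`): both version strings are split on `.`, `-`, and `:` into arrays and compared component by component, with missing components treated as zero. A minimal sketch of that comparison, assuming integer components — this is a reconstruction for illustration, not the verbatim SPDK helper:

version_lt() {    # sketch: succeeds when $1 sorts strictly before $2
    local IFS=.-: v a b
    local -a ver1 ver2
    read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
    read -ra ver2 <<< "$2"    # "2"    -> (2)
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing components count as 0
        ((a < b)) && return 0
        ((a > b)) && return 1
    done
    return 1    # equal versions are not "less than"
}
version_lt 1.15 2 && echo "pre-2.0 lcov: use the old --rc lcov_* option names"

As logged above, the true branch is what sets lcov_rc_opt to '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'.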
00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:42.336 20:31:26 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:42.595 20:31:26 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:42.595 20:31:26 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.CX67pMp4gi 00:20:42.595 20:31:26 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:42.595 20:31:26 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:42.595 20:31:26 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:42.595 20:31:26 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:42.595 20:31:26 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:42.595 20:31:26 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:42.595 20:31:26 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:42.595 20:31:26 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:42.595 
20:31:26 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=78736 00:20:42.595 20:31:26 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 78736 00:20:42.595 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 78736 ']' 00:20:42.595 20:31:26 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:42.595 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.595 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.595 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.595 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.595 20:31:26 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:42.595 [2024-12-12 20:31:26.648391] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:20:42.595 [2024-12-12 20:31:26.648646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78736 ] 00:20:42.595 [2024-12-12 20:31:26.804051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.853 [2024-12-12 20:31:26.886135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.419 20:31:27 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.419 20:31:27 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:43.419 20:31:27 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:43.419 20:31:27 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:43.419 20:31:27 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:43.419 20:31:27 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:43.419 20:31:27 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:43.419 20:31:27 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:43.677 20:31:27 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:43.677 20:31:27 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:43.677 20:31:27 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:43.677 20:31:27 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:43.677 20:31:27 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:43.677 20:31:27 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:43.677 20:31:27 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:43.677 20:31:27 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:43.677 20:31:27 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:43.677 { 00:20:43.677 "name": "nvme0n1", 00:20:43.677 "aliases": [ 00:20:43.677 "951f4146-9730-474c-a065-bdcac4999d22" 00:20:43.677 ], 00:20:43.677 "product_name": "NVMe disk", 00:20:43.677 "block_size": 4096, 00:20:43.677 "num_blocks": 1310720, 00:20:43.677 "uuid": 
"951f4146-9730-474c-a065-bdcac4999d22", 00:20:43.677 "numa_id": -1, 00:20:43.677 "assigned_rate_limits": { 00:20:43.677 "rw_ios_per_sec": 0, 00:20:43.677 "rw_mbytes_per_sec": 0, 00:20:43.677 "r_mbytes_per_sec": 0, 00:20:43.677 "w_mbytes_per_sec": 0 00:20:43.677 }, 00:20:43.677 "claimed": true, 00:20:43.677 "claim_type": "read_many_write_one", 00:20:43.677 "zoned": false, 00:20:43.677 "supported_io_types": { 00:20:43.677 "read": true, 00:20:43.677 "write": true, 00:20:43.677 "unmap": true, 00:20:43.677 "flush": true, 00:20:43.677 "reset": true, 00:20:43.677 "nvme_admin": true, 00:20:43.677 "nvme_io": true, 00:20:43.677 "nvme_io_md": false, 00:20:43.677 "write_zeroes": true, 00:20:43.677 "zcopy": false, 00:20:43.677 "get_zone_info": false, 00:20:43.677 "zone_management": false, 00:20:43.677 "zone_append": false, 00:20:43.677 "compare": true, 00:20:43.677 "compare_and_write": false, 00:20:43.677 "abort": true, 00:20:43.677 "seek_hole": false, 00:20:43.677 "seek_data": false, 00:20:43.677 "copy": true, 00:20:43.677 "nvme_iov_md": false 00:20:43.677 }, 00:20:43.677 "driver_specific": { 00:20:43.677 "nvme": [ 00:20:43.677 { 00:20:43.677 "pci_address": "0000:00:11.0", 00:20:43.677 "trid": { 00:20:43.677 "trtype": "PCIe", 00:20:43.677 "traddr": "0000:00:11.0" 00:20:43.677 }, 00:20:43.677 "ctrlr_data": { 00:20:43.677 "cntlid": 0, 00:20:43.677 "vendor_id": "0x1b36", 00:20:43.677 "model_number": "QEMU NVMe Ctrl", 00:20:43.677 "serial_number": "12341", 00:20:43.677 "firmware_revision": "8.0.0", 00:20:43.677 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:43.677 "oacs": { 00:20:43.677 "security": 0, 00:20:43.677 "format": 1, 00:20:43.677 "firmware": 0, 00:20:43.677 "ns_manage": 1 00:20:43.677 }, 00:20:43.677 "multi_ctrlr": false, 00:20:43.677 "ana_reporting": false 00:20:43.677 }, 00:20:43.677 "vs": { 00:20:43.677 "nvme_version": "1.4" 00:20:43.677 }, 00:20:43.677 "ns_data": { 00:20:43.677 "id": 1, 00:20:43.677 "can_share": false 00:20:43.677 } 00:20:43.677 } 00:20:43.677 ], 00:20:43.677 "mp_policy": "active_passive" 00:20:43.677 } 00:20:43.677 } 00:20:43.677 ]' 00:20:43.677 20:31:27 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:43.677 20:31:27 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:43.677 20:31:27 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:43.935 20:31:27 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:43.935 20:31:27 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:43.935 20:31:27 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:43.935 20:31:27 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:43.935 20:31:27 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:43.935 20:31:27 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:43.935 20:31:27 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:43.935 20:31:27 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:44.193 20:31:28 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=a3bca995-15b7-4faf-9bf2-5819fa85d706 00:20:44.193 20:31:28 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:44.193 20:31:28 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a3bca995-15b7-4faf-9bf2-5819fa85d706 00:20:44.193 20:31:28 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:44.451 20:31:28 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=d195da4a-5e60-4998-8a52-3b3039688ba5 00:20:44.451 20:31:28 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d195da4a-5e60-4998-8a52-3b3039688ba5 00:20:44.710 20:31:28 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=34ef7ab1-6903-449d-a349-73fac656903f 00:20:44.710 20:31:28 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:44.710 20:31:28 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 34ef7ab1-6903-449d-a349-73fac656903f 00:20:44.710 20:31:28 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:44.710 20:31:28 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:44.710 20:31:28 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=34ef7ab1-6903-449d-a349-73fac656903f 00:20:44.710 20:31:28 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:44.710 20:31:28 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 34ef7ab1-6903-449d-a349-73fac656903f 00:20:44.710 20:31:28 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=34ef7ab1-6903-449d-a349-73fac656903f 00:20:44.710 20:31:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:44.710 20:31:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:44.710 20:31:28 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:44.710 20:31:28 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 34ef7ab1-6903-449d-a349-73fac656903f 00:20:44.969 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:44.969 { 00:20:44.969 "name": "34ef7ab1-6903-449d-a349-73fac656903f", 00:20:44.969 "aliases": [ 00:20:44.969 "lvs/nvme0n1p0" 00:20:44.969 ], 00:20:44.969 "product_name": "Logical Volume", 00:20:44.969 "block_size": 4096, 00:20:44.969 "num_blocks": 26476544, 00:20:44.969 "uuid": "34ef7ab1-6903-449d-a349-73fac656903f", 00:20:44.969 "assigned_rate_limits": { 00:20:44.969 "rw_ios_per_sec": 0, 00:20:44.969 "rw_mbytes_per_sec": 0, 00:20:44.969 "r_mbytes_per_sec": 0, 00:20:44.969 "w_mbytes_per_sec": 0 00:20:44.969 }, 00:20:44.969 "claimed": false, 00:20:44.969 "zoned": false, 00:20:44.969 "supported_io_types": { 00:20:44.969 "read": true, 00:20:44.969 "write": true, 00:20:44.969 "unmap": true, 00:20:44.969 "flush": false, 00:20:44.969 "reset": true, 00:20:44.969 "nvme_admin": false, 00:20:44.969 "nvme_io": false, 00:20:44.969 "nvme_io_md": false, 00:20:44.969 "write_zeroes": true, 00:20:44.969 "zcopy": false, 00:20:44.969 "get_zone_info": false, 00:20:44.969 "zone_management": false, 00:20:44.969 "zone_append": false, 00:20:44.969 "compare": false, 00:20:44.969 "compare_and_write": false, 00:20:44.969 "abort": false, 00:20:44.969 "seek_hole": true, 00:20:44.969 "seek_data": true, 00:20:44.969 "copy": false, 00:20:44.969 "nvme_iov_md": false 00:20:44.969 }, 00:20:44.969 "driver_specific": { 00:20:44.969 "lvol": { 00:20:44.969 "lvol_store_uuid": "d195da4a-5e60-4998-8a52-3b3039688ba5", 00:20:44.969 "base_bdev": "nvme0n1", 00:20:44.969 "thin_provision": true, 00:20:44.969 "num_allocated_clusters": 0, 00:20:44.969 "snapshot": false, 00:20:44.969 "clone": false, 00:20:44.969 "esnap_clone": false 00:20:44.969 } 00:20:44.969 } 00:20:44.969 } 00:20:44.969 ]' 00:20:44.969 20:31:29 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:44.969 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:44.969 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:44.969 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:44.969 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:44.969 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:44.969 20:31:29 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:44.969 20:31:29 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:44.969 20:31:29 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:45.227 20:31:29 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:45.227 20:31:29 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:45.227 20:31:29 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 34ef7ab1-6903-449d-a349-73fac656903f 00:20:45.227 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=34ef7ab1-6903-449d-a349-73fac656903f 00:20:45.227 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:45.227 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:45.227 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:45.227 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 34ef7ab1-6903-449d-a349-73fac656903f 00:20:45.486 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:45.486 { 00:20:45.486 "name": "34ef7ab1-6903-449d-a349-73fac656903f", 00:20:45.486 "aliases": [ 00:20:45.486 "lvs/nvme0n1p0" 00:20:45.486 ], 00:20:45.486 "product_name": "Logical Volume", 00:20:45.486 "block_size": 4096, 00:20:45.486 "num_blocks": 26476544, 00:20:45.486 "uuid": "34ef7ab1-6903-449d-a349-73fac656903f", 00:20:45.486 "assigned_rate_limits": { 00:20:45.486 "rw_ios_per_sec": 0, 00:20:45.486 "rw_mbytes_per_sec": 0, 00:20:45.486 "r_mbytes_per_sec": 0, 00:20:45.486 "w_mbytes_per_sec": 0 00:20:45.486 }, 00:20:45.486 "claimed": false, 00:20:45.486 "zoned": false, 00:20:45.486 "supported_io_types": { 00:20:45.486 "read": true, 00:20:45.486 "write": true, 00:20:45.486 "unmap": true, 00:20:45.486 "flush": false, 00:20:45.486 "reset": true, 00:20:45.486 "nvme_admin": false, 00:20:45.486 "nvme_io": false, 00:20:45.486 "nvme_io_md": false, 00:20:45.486 "write_zeroes": true, 00:20:45.486 "zcopy": false, 00:20:45.486 "get_zone_info": false, 00:20:45.486 "zone_management": false, 00:20:45.486 "zone_append": false, 00:20:45.486 "compare": false, 00:20:45.486 "compare_and_write": false, 00:20:45.486 "abort": false, 00:20:45.486 "seek_hole": true, 00:20:45.486 "seek_data": true, 00:20:45.486 "copy": false, 00:20:45.486 "nvme_iov_md": false 00:20:45.486 }, 00:20:45.486 "driver_specific": { 00:20:45.486 "lvol": { 00:20:45.486 "lvol_store_uuid": "d195da4a-5e60-4998-8a52-3b3039688ba5", 00:20:45.486 "base_bdev": "nvme0n1", 00:20:45.486 "thin_provision": true, 00:20:45.486 "num_allocated_clusters": 0, 00:20:45.486 "snapshot": false, 00:20:45.486 "clone": false, 00:20:45.486 "esnap_clone": false 00:20:45.486 } 00:20:45.486 } 00:20:45.486 } 00:20:45.486 ]' 00:20:45.486 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
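Each `get_bdev_size` pass traced here reduces to two jq queries against the `bdev_get_bdevs` JSON plus one piece of integer arithmetic: size in MiB = block_size × num_blocks / 2^20. A hedged, self-contained sketch of that arithmetic (the function name is illustrative, not the SPDK helper itself):

bdev_size_mib() {    # sketch: MiB from block size and block count
    local bs=$1 nb=$2
    echo $(( bs * nb / 1024 / 1024 ))
}
bdev_size_mib 4096 1310720     # -> 5120   (nvme0n1, matching bdev_size=5120 above)
bdev_size_mib 4096 26476544    # -> 103424 (the lvol, matching bdev_size=103424 above)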
00:20:45.486 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:45.486 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:45.486 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:45.486 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:45.486 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:45.486 20:31:29 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:45.486 20:31:29 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:45.744 20:31:29 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:45.744 20:31:29 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 34ef7ab1-6903-449d-a349-73fac656903f 00:20:45.744 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=34ef7ab1-6903-449d-a349-73fac656903f 00:20:45.744 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:45.744 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:45.744 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:45.744 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 34ef7ab1-6903-449d-a349-73fac656903f 00:20:46.002 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:46.002 { 00:20:46.002 "name": "34ef7ab1-6903-449d-a349-73fac656903f", 00:20:46.002 "aliases": [ 00:20:46.002 "lvs/nvme0n1p0" 00:20:46.002 ], 00:20:46.002 "product_name": "Logical Volume", 00:20:46.002 "block_size": 4096, 00:20:46.002 "num_blocks": 26476544, 00:20:46.002 "uuid": "34ef7ab1-6903-449d-a349-73fac656903f", 00:20:46.002 "assigned_rate_limits": { 00:20:46.002 "rw_ios_per_sec": 0, 00:20:46.002 "rw_mbytes_per_sec": 0, 00:20:46.002 "r_mbytes_per_sec": 0, 00:20:46.002 "w_mbytes_per_sec": 0 00:20:46.002 }, 00:20:46.002 "claimed": false, 00:20:46.002 "zoned": false, 00:20:46.002 "supported_io_types": { 00:20:46.002 "read": true, 00:20:46.002 "write": true, 00:20:46.002 "unmap": true, 00:20:46.002 "flush": false, 00:20:46.002 "reset": true, 00:20:46.002 "nvme_admin": false, 00:20:46.002 "nvme_io": false, 00:20:46.002 "nvme_io_md": false, 00:20:46.002 "write_zeroes": true, 00:20:46.002 "zcopy": false, 00:20:46.002 "get_zone_info": false, 00:20:46.002 "zone_management": false, 00:20:46.002 "zone_append": false, 00:20:46.002 "compare": false, 00:20:46.002 "compare_and_write": false, 00:20:46.002 "abort": false, 00:20:46.002 "seek_hole": true, 00:20:46.002 "seek_data": true, 00:20:46.002 "copy": false, 00:20:46.002 "nvme_iov_md": false 00:20:46.002 }, 00:20:46.002 "driver_specific": { 00:20:46.002 "lvol": { 00:20:46.002 "lvol_store_uuid": "d195da4a-5e60-4998-8a52-3b3039688ba5", 00:20:46.002 "base_bdev": "nvme0n1", 00:20:46.002 "thin_provision": true, 00:20:46.002 "num_allocated_clusters": 0, 00:20:46.002 "snapshot": false, 00:20:46.002 "clone": false, 00:20:46.002 "esnap_clone": false 00:20:46.002 } 00:20:46.002 } 00:20:46.002 } 00:20:46.002 ]' 00:20:46.002 20:31:29 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:46.002 20:31:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:46.002 20:31:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:46.002 20:31:30 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:46.002 20:31:30 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:46.002 20:31:30 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:46.002 20:31:30 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:46.002 20:31:30 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 34ef7ab1-6903-449d-a349-73fac656903f --l2p_dram_limit 10' 00:20:46.002 20:31:30 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:46.002 20:31:30 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:46.002 20:31:30 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:46.002 20:31:30 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:46.002 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:46.002 20:31:30 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 34ef7ab1-6903-449d-a349-73fac656903f --l2p_dram_limit 10 -c nvc0n1p0 00:20:46.261 [2024-12-12 20:31:30.235805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.261 [2024-12-12 20:31:30.235849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:46.261 [2024-12-12 20:31:30.235863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:46.261 [2024-12-12 20:31:30.235870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.261 [2024-12-12 20:31:30.235916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.261 [2024-12-12 20:31:30.235924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.261 [2024-12-12 20:31:30.235931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:46.261 [2024-12-12 20:31:30.235937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.262 [2024-12-12 20:31:30.235957] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:46.262 [2024-12-12 20:31:30.236574] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:46.262 [2024-12-12 20:31:30.236590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.262 [2024-12-12 20:31:30.236597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.262 [2024-12-12 20:31:30.236604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:20:46.262 [2024-12-12 20:31:30.236611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.262 [2024-12-12 20:31:30.236635] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9c27c25e-6895-4077-a8c7-dd2dac7fe71c 00:20:46.262 [2024-12-12 20:31:30.237578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.262 [2024-12-12 20:31:30.237686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:46.262 [2024-12-12 20:31:30.237698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:46.262 [2024-12-12 20:31:30.237706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.262 [2024-12-12 20:31:30.242426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.262 [2024-12-12 
20:31:30.242449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.262 [2024-12-12 20:31:30.242457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.659 ms 00:20:46.262 [2024-12-12 20:31:30.242464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.262 [2024-12-12 20:31:30.242529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.262 [2024-12-12 20:31:30.242538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.262 [2024-12-12 20:31:30.242544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:46.262 [2024-12-12 20:31:30.242554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.262 [2024-12-12 20:31:30.242609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.262 [2024-12-12 20:31:30.242618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:46.262 [2024-12-12 20:31:30.242624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:46.262 [2024-12-12 20:31:30.242633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.262 [2024-12-12 20:31:30.242648] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:46.262 [2024-12-12 20:31:30.245578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.262 [2024-12-12 20:31:30.245600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.262 [2024-12-12 20:31:30.245610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.931 ms 00:20:46.262 [2024-12-12 20:31:30.245616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.262 [2024-12-12 20:31:30.245644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.262 [2024-12-12 20:31:30.245650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:46.262 [2024-12-12 20:31:30.245658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:46.262 [2024-12-12 20:31:30.245663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.262 [2024-12-12 20:31:30.245677] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:46.262 [2024-12-12 20:31:30.245786] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:46.262 [2024-12-12 20:31:30.245797] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:46.262 [2024-12-12 20:31:30.245806] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:46.262 [2024-12-12 20:31:30.245816] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:46.262 [2024-12-12 20:31:30.245822] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:46.262 [2024-12-12 20:31:30.245830] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:46.262 [2024-12-12 20:31:30.245835] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:46.262 [2024-12-12 20:31:30.245845] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:46.262 [2024-12-12 20:31:30.245851] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:46.262 [2024-12-12 20:31:30.245857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.262 [2024-12-12 20:31:30.245867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:46.262 [2024-12-12 20:31:30.245874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:20:46.262 [2024-12-12 20:31:30.245879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.262 [2024-12-12 20:31:30.245946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.262 [2024-12-12 20:31:30.245952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:46.262 [2024-12-12 20:31:30.245959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:46.262 [2024-12-12 20:31:30.245965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.262 [2024-12-12 20:31:30.246039] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:46.262 [2024-12-12 20:31:30.246046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:46.262 [2024-12-12 20:31:30.246054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:46.262 [2024-12-12 20:31:30.246060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.262 [2024-12-12 20:31:30.246068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:46.262 [2024-12-12 20:31:30.246073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:46.262 [2024-12-12 20:31:30.246079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:46.262 [2024-12-12 20:31:30.246084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:46.262 [2024-12-12 20:31:30.246091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:46.262 [2024-12-12 20:31:30.246096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:46.262 [2024-12-12 20:31:30.246102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:46.262 [2024-12-12 20:31:30.246108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:46.262 [2024-12-12 20:31:30.246115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:46.262 [2024-12-12 20:31:30.246120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:46.262 [2024-12-12 20:31:30.246127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:46.262 [2024-12-12 20:31:30.246132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.262 [2024-12-12 20:31:30.246140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:46.262 [2024-12-12 20:31:30.246146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:46.262 [2024-12-12 20:31:30.246152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.262 [2024-12-12 20:31:30.246157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:46.262 [2024-12-12 20:31:30.246163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:46.262 [2024-12-12 20:31:30.246168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.262 [2024-12-12 20:31:30.246174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:46.262 
[2024-12-12 20:31:30.246179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:46.262 [2024-12-12 20:31:30.246185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.262 [2024-12-12 20:31:30.246190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:46.262 [2024-12-12 20:31:30.246196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:46.262 [2024-12-12 20:31:30.246201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.262 [2024-12-12 20:31:30.246207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:46.262 [2024-12-12 20:31:30.246212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:46.262 [2024-12-12 20:31:30.246218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.262 [2024-12-12 20:31:30.246223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:46.262 [2024-12-12 20:31:30.246230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:46.262 [2024-12-12 20:31:30.246235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:46.262 [2024-12-12 20:31:30.246241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:46.262 [2024-12-12 20:31:30.246246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:46.262 [2024-12-12 20:31:30.246252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:46.262 [2024-12-12 20:31:30.246257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:46.262 [2024-12-12 20:31:30.246265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:46.262 [2024-12-12 20:31:30.246269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.262 [2024-12-12 20:31:30.246276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:46.262 [2024-12-12 20:31:30.246280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:46.262 [2024-12-12 20:31:30.246287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.262 [2024-12-12 20:31:30.246292] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:46.262 [2024-12-12 20:31:30.246298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:46.262 [2024-12-12 20:31:30.246304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:46.262 [2024-12-12 20:31:30.246311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.262 [2024-12-12 20:31:30.246318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:46.262 [2024-12-12 20:31:30.246326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:46.262 [2024-12-12 20:31:30.246331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:46.262 [2024-12-12 20:31:30.246338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:46.262 [2024-12-12 20:31:30.246343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:46.262 [2024-12-12 20:31:30.246349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:46.263 [2024-12-12 20:31:30.246355] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:46.263 [2024-12-12 
20:31:30.246363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:46.263 [2024-12-12 20:31:30.246371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:46.263 [2024-12-12 20:31:30.246378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:46.263 [2024-12-12 20:31:30.246384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:46.263 [2024-12-12 20:31:30.246390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:46.263 [2024-12-12 20:31:30.246396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:46.263 [2024-12-12 20:31:30.246403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:46.263 [2024-12-12 20:31:30.246408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:46.263 [2024-12-12 20:31:30.246435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:46.263 [2024-12-12 20:31:30.246441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:46.263 [2024-12-12 20:31:30.246451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:46.263 [2024-12-12 20:31:30.246456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:46.263 [2024-12-12 20:31:30.246463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:46.263 [2024-12-12 20:31:30.246468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:46.263 [2024-12-12 20:31:30.246477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:46.263 [2024-12-12 20:31:30.246482] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:46.263 [2024-12-12 20:31:30.246490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:46.263 [2024-12-12 20:31:30.246496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:46.263 [2024-12-12 20:31:30.246503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:46.263 [2024-12-12 20:31:30.246508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:46.263 [2024-12-12 20:31:30.246515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:46.263 [2024-12-12 20:31:30.246520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.263 [2024-12-12 20:31:30.246528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:46.263 [2024-12-12 20:31:30.246533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:20:46.263 [2024-12-12 20:31:30.246540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.263 [2024-12-12 20:31:30.246580] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:46.263 [2024-12-12 20:31:30.246591] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:48.163 [2024-12-12 20:31:32.232178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.163 [2024-12-12 20:31:32.232384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:48.163 [2024-12-12 20:31:32.232407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1985.588 ms 00:20:48.163 [2024-12-12 20:31:32.232430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.163 [2024-12-12 20:31:32.257862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.163 [2024-12-12 20:31:32.257908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:48.163 [2024-12-12 20:31:32.257921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.233 ms 00:20:48.163 [2024-12-12 20:31:32.257931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.163 [2024-12-12 20:31:32.258054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.163 [2024-12-12 20:31:32.258067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:48.163 [2024-12-12 20:31:32.258075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:48.163 [2024-12-12 20:31:32.258088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.163 [2024-12-12 20:31:32.288049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.163 [2024-12-12 20:31:32.288085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:48.163 [2024-12-12 20:31:32.288095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.916 ms 00:20:48.163 [2024-12-12 20:31:32.288105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.163 [2024-12-12 20:31:32.288132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.163 [2024-12-12 20:31:32.288146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:48.163 [2024-12-12 20:31:32.288153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:48.164 [2024-12-12 20:31:32.288168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.164 [2024-12-12 20:31:32.288518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.164 [2024-12-12 20:31:32.288536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:48.164 [2024-12-12 20:31:32.288544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:20:48.164 [2024-12-12 20:31:32.288553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.164 
[2024-12-12 20:31:32.288653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.164 [2024-12-12 20:31:32.288663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:48.164 [2024-12-12 20:31:32.288673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:48.164 [2024-12-12 20:31:32.288683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.164 [2024-12-12 20:31:32.302578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.164 [2024-12-12 20:31:32.302715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:48.164 [2024-12-12 20:31:32.302731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.878 ms 00:20:48.164 [2024-12-12 20:31:32.302740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.164 [2024-12-12 20:31:32.329259] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:48.164 [2024-12-12 20:31:32.332018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.164 [2024-12-12 20:31:32.332050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:48.164 [2024-12-12 20:31:32.332064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.208 ms 00:20:48.164 [2024-12-12 20:31:32.332073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.164 [2024-12-12 20:31:32.385433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.164 [2024-12-12 20:31:32.385470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:48.164 [2024-12-12 20:31:32.385484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.324 ms 00:20:48.164 [2024-12-12 20:31:32.385492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.164 [2024-12-12 20:31:32.385663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.164 [2024-12-12 20:31:32.385676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:48.164 [2024-12-12 20:31:32.385688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:20:48.164 [2024-12-12 20:31:32.385695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.422 [2024-12-12 20:31:32.409099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.422 [2024-12-12 20:31:32.409129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:48.422 [2024-12-12 20:31:32.409142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.359 ms 00:20:48.422 [2024-12-12 20:31:32.409150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.422 [2024-12-12 20:31:32.431559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.422 [2024-12-12 20:31:32.431695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:48.422 [2024-12-12 20:31:32.431715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.368 ms 00:20:48.422 [2024-12-12 20:31:32.431722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.422 [2024-12-12 20:31:32.432270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.423 [2024-12-12 20:31:32.432287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:48.423 
[2024-12-12 20:31:32.432297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:20:48.423 [2024-12-12 20:31:32.432307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.423 [2024-12-12 20:31:32.505308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.423 [2024-12-12 20:31:32.505349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:48.423 [2024-12-12 20:31:32.505367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.964 ms 00:20:48.423 [2024-12-12 20:31:32.505375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.423 [2024-12-12 20:31:32.529070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.423 [2024-12-12 20:31:32.529106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:48.423 [2024-12-12 20:31:32.529120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.606 ms 00:20:48.423 [2024-12-12 20:31:32.529128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.423 [2024-12-12 20:31:32.552272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.423 [2024-12-12 20:31:32.552317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:48.423 [2024-12-12 20:31:32.552329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.106 ms 00:20:48.423 [2024-12-12 20:31:32.552335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.423 [2024-12-12 20:31:32.574890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.423 [2024-12-12 20:31:32.574925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:48.423 [2024-12-12 20:31:32.574938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.518 ms 00:20:48.423 [2024-12-12 20:31:32.574945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.423 [2024-12-12 20:31:32.574983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.423 [2024-12-12 20:31:32.574992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:48.423 [2024-12-12 20:31:32.575004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:48.423 [2024-12-12 20:31:32.575012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.423 [2024-12-12 20:31:32.575100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.423 [2024-12-12 20:31:32.575112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:48.423 [2024-12-12 20:31:32.575122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:48.423 [2024-12-12 20:31:32.575129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.423 [2024-12-12 20:31:32.576023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2339.823 ms, result 0 00:20:48.423 { 00:20:48.423 "name": "ftl0", 00:20:48.423 "uuid": "9c27c25e-6895-4077-a8c7-dd2dac7fe71c" 00:20:48.423 } 00:20:48.423 20:31:32 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:48.423 20:31:32 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:48.681 20:31:32 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:48.681 20:31:32 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:48.941 [2024-12-12 20:31:32.995581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.941 [2024-12-12 20:31:32.995632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:48.941 [2024-12-12 20:31:32.995645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:48.941 [2024-12-12 20:31:32.995654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.941 [2024-12-12 20:31:32.995678] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:48.941 [2024-12-12 20:31:32.998305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.941 [2024-12-12 20:31:32.998447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:48.941 [2024-12-12 20:31:32.998467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.609 ms 00:20:48.941 [2024-12-12 20:31:32.998475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.941 [2024-12-12 20:31:32.998747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.941 [2024-12-12 20:31:32.998763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:48.941 [2024-12-12 20:31:32.998773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:20:48.941 [2024-12-12 20:31:32.998781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.941 [2024-12-12 20:31:33.002009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.941 [2024-12-12 20:31:33.002101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:48.941 [2024-12-12 20:31:33.002116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.211 ms 00:20:48.941 [2024-12-12 20:31:33.002124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.941 [2024-12-12 20:31:33.008483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.941 [2024-12-12 20:31:33.008582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:48.941 [2024-12-12 20:31:33.008600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.338 ms 00:20:48.941 [2024-12-12 20:31:33.008608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.941 [2024-12-12 20:31:33.032119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.941 [2024-12-12 20:31:33.032227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:48.941 [2024-12-12 20:31:33.032245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.446 ms 00:20:48.941 [2024-12-12 20:31:33.032252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.941 [2024-12-12 20:31:33.046750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.941 [2024-12-12 20:31:33.046861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:48.941 [2024-12-12 20:31:33.046879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.461 ms 00:20:48.941 [2024-12-12 20:31:33.046887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.941 [2024-12-12 20:31:33.047029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.941 [2024-12-12 20:31:33.047040] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:48.941 [2024-12-12 20:31:33.047050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms
00:20:48.941 [2024-12-12 20:31:33.047057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:48.941 [2024-12-12 20:31:33.069685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:48.941 [2024-12-12 20:31:33.069802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:20:48.941 [2024-12-12 20:31:33.069820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.607 ms
00:20:48.941 [2024-12-12 20:31:33.069827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:48.941 [2024-12-12 20:31:33.092246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:48.941 [2024-12-12 20:31:33.092275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:20:48.941 [2024-12-12 20:31:33.092287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.388 ms
00:20:48.941 [2024-12-12 20:31:33.092294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:48.941 [2024-12-12 20:31:33.114276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:48.941 [2024-12-12 20:31:33.114305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:48.941 [2024-12-12 20:31:33.114316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.946 ms
00:20:48.941 [2024-12-12 20:31:33.114323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:48.941 [2024-12-12 20:31:33.136465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:48.941 [2024-12-12 20:31:33.136569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:48.941 [2024-12-12 20:31:33.136585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.074 ms
00:20:48.941 [2024-12-12 20:31:33.136592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:48.941 [2024-12-12 20:31:33.136622] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[Bands 1-100 condensed: ftl_dev_dump_bands reports every band identically as "0 / 261120 wr_cnt: 0 state: free"]
00:20:48.942 [2024-12-12 20:31:33.137494] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:48.943 [2024-12-12 20:31:33.137503] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c27c25e-6895-4077-a8c7-dd2dac7fe71c
00:20:48.943 [2024-12-12 20:31:33.137511] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:48.943 [2024-12-12 20:31:33.137520] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:48.943 [2024-12-12 20:31:33.137529] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:48.943 [2024-12-12 20:31:33.137538] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:48.943 [2024-12-12 20:31:33.137545] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:48.943 [2024-12-12 20:31:33.137553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:48.943 [2024-12-12 20:31:33.137560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:48.943 [2024-12-12 20:31:33.137568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:48.943 [2024-12-12 20:31:33.137574] ftl_debug.c: 220:ftl_dev_dump_stats:
*NOTICE*: [FTL][ftl0] start: 0 00:20:48.943 [2024-12-12 20:31:33.137583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.943 [2024-12-12 20:31:33.137590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:48.943 [2024-12-12 20:31:33.137600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:20:48.943 [2024-12-12 20:31:33.137619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.943 [2024-12-12 20:31:33.150046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.943 [2024-12-12 20:31:33.150075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:48.943 [2024-12-12 20:31:33.150087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.394 ms 00:20:48.943 [2024-12-12 20:31:33.150095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.943 [2024-12-12 20:31:33.150455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.943 [2024-12-12 20:31:33.150465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:48.943 [2024-12-12 20:31:33.150478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:20:48.943 [2024-12-12 20:31:33.150485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.201 [2024-12-12 20:31:33.191848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.201 [2024-12-12 20:31:33.191976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:49.201 [2024-12-12 20:31:33.191994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.201 [2024-12-12 20:31:33.192002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.201 [2024-12-12 20:31:33.192059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.201 [2024-12-12 20:31:33.192068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:49.201 [2024-12-12 20:31:33.192079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.201 [2024-12-12 20:31:33.192086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.201 [2024-12-12 20:31:33.192162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.201 [2024-12-12 20:31:33.192172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:49.201 [2024-12-12 20:31:33.192181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.201 [2024-12-12 20:31:33.192188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.201 [2024-12-12 20:31:33.192207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.201 [2024-12-12 20:31:33.192215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:49.201 [2024-12-12 20:31:33.192224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.201 [2024-12-12 20:31:33.192233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.201 [2024-12-12 20:31:33.267245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.201 [2024-12-12 20:31:33.267395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.201 [2024-12-12 20:31:33.267436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:49.201 [2024-12-12 20:31:33.267445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.201 [2024-12-12 20:31:33.329494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.202 [2024-12-12 20:31:33.329633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:49.202 [2024-12-12 20:31:33.329650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.202 [2024-12-12 20:31:33.329661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.202 [2024-12-12 20:31:33.329732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.202 [2024-12-12 20:31:33.329741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:49.202 [2024-12-12 20:31:33.329751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.202 [2024-12-12 20:31:33.329758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.202 [2024-12-12 20:31:33.329816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.202 [2024-12-12 20:31:33.329825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:49.202 [2024-12-12 20:31:33.329834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.202 [2024-12-12 20:31:33.329842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.202 [2024-12-12 20:31:33.329936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.202 [2024-12-12 20:31:33.329945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:49.202 [2024-12-12 20:31:33.329954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.202 [2024-12-12 20:31:33.329961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.202 [2024-12-12 20:31:33.329992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.202 [2024-12-12 20:31:33.330001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:49.202 [2024-12-12 20:31:33.330010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.202 [2024-12-12 20:31:33.330018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.202 [2024-12-12 20:31:33.330054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.202 [2024-12-12 20:31:33.330063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:49.202 [2024-12-12 20:31:33.330071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.202 [2024-12-12 20:31:33.330078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.202 [2024-12-12 20:31:33.330119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.202 [2024-12-12 20:31:33.330128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:49.202 [2024-12-12 20:31:33.330137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.202 [2024-12-12 20:31:33.330144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.202 [2024-12-12 20:31:33.330266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 334.654 ms, result 0 00:20:49.202 true 00:20:49.202 20:31:33 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 78736 
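[Editorial aside: the "# killprocess 78736" call above is expanded by bash xtrace in the lines that follow (common/autotest_common.sh, traced at @954-@978). As a reading aid, here is a minimal sketch of that helper reconstructed only from the traced commands; the guard semantics and the sudo branch body are assumptions, not SPDK's actual source.]

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1             # @954: empty-pid guard ('[' -z "$pid" ']')
    kill -0 "$pid" || return 0            # @958: assumed early exit if the pid is already gone
    if [[ $(uname) == Linux ]]; then      # @959: platform check
        # @960: resolve the command name ("reactor_0" for this SPDK app)
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [[ $process_name == sudo ]]; then  # @964: sudo wrappers get special handling
        :                                 # assumption: the real helper signals the sudo child instead
    fi
    echo "killing process with pid $pid"  # @972
    kill "$pid"                           # @973: default SIGTERM
    wait "$pid"                           # @978: reap the process
}

[In this trace, pid 78736 resolves to reactor_0, so the sudo branch is skipped and the SPDK app receives a plain SIGTERM before the script waits for it to exit.]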
00:20:49.202 20:31:33 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78736 ']' 00:20:49.202 20:31:33 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78736 00:20:49.202 20:31:33 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:49.202 20:31:33 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.202 20:31:33 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78736 00:20:49.202 killing process with pid 78736 00:20:49.202 20:31:33 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:49.202 20:31:33 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:49.202 20:31:33 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78736' 00:20:49.202 20:31:33 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 78736 00:20:49.202 20:31:33 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 78736 00:20:55.786 20:31:39 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:59.969 262144+0 records in 00:20:59.969 262144+0 records out 00:20:59.969 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.28239 s, 251 MB/s 00:20:59.969 20:31:43 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:01.868 20:31:45 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:01.868 [2024-12-12 20:31:45.856583] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:21:01.868 [2024-12-12 20:31:45.856833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78961 ] 00:21:01.868 [2024-12-12 20:31:46.017361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.127 [2024-12-12 20:31:46.116783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.386 [2024-12-12 20:31:46.373444] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:02.386 [2024-12-12 20:31:46.374026] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:02.386 [2024-12-12 20:31:46.530678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.386 [2024-12-12 20:31:46.530825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:02.386 [2024-12-12 20:31:46.530844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:02.386 [2024-12-12 20:31:46.530852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.386 [2024-12-12 20:31:46.530906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.386 [2024-12-12 20:31:46.530918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:02.386 [2024-12-12 20:31:46.530926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:02.386 [2024-12-12 20:31:46.530934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.386 [2024-12-12 20:31:46.530953] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:02.386 [2024-12-12 20:31:46.531637] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:02.386 [2024-12-12 20:31:46.531654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.386 [2024-12-12 20:31:46.531661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:02.386 [2024-12-12 20:31:46.531670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:21:02.386 [2024-12-12 20:31:46.531677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.386 [2024-12-12 20:31:46.532684] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:02.386 [2024-12-12 20:31:46.545633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.386 [2024-12-12 20:31:46.545760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:02.386 [2024-12-12 20:31:46.545777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.950 ms 00:21:02.386 [2024-12-12 20:31:46.545786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.386 [2024-12-12 20:31:46.545839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.386 [2024-12-12 20:31:46.545848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:02.386 [2024-12-12 20:31:46.545857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:02.386 [2024-12-12 20:31:46.545864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.386 [2024-12-12 20:31:46.550798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.386 [2024-12-12 20:31:46.550825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:02.386 [2024-12-12 20:31:46.550835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.872 ms 00:21:02.386 [2024-12-12 20:31:46.550847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.386 [2024-12-12 20:31:46.550917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.386 [2024-12-12 20:31:46.550926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:02.386 [2024-12-12 20:31:46.550934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:02.386 [2024-12-12 20:31:46.550941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.386 [2024-12-12 20:31:46.550972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.386 [2024-12-12 20:31:46.550981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:02.386 [2024-12-12 20:31:46.550989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:02.386 [2024-12-12 20:31:46.550996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.386 [2024-12-12 20:31:46.551018] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:02.386 [2024-12-12 20:31:46.554319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.386 [2024-12-12 20:31:46.554345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:02.386 [2024-12-12 20:31:46.554358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.305 ms 00:21:02.386 [2024-12-12 20:31:46.554366] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.386 [2024-12-12 20:31:46.554396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.386 [2024-12-12 20:31:46.554406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:02.386 [2024-12-12 20:31:46.554423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:02.386 [2024-12-12 20:31:46.554431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.386 [2024-12-12 20:31:46.554451] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:02.386 [2024-12-12 20:31:46.554472] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:02.386 [2024-12-12 20:31:46.554507] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:02.386 [2024-12-12 20:31:46.554525] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:02.386 [2024-12-12 20:31:46.554630] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:02.386 [2024-12-12 20:31:46.554641] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:02.386 [2024-12-12 20:31:46.554652] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:02.386 [2024-12-12 20:31:46.554663] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:02.386 [2024-12-12 20:31:46.554673] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:02.386 [2024-12-12 20:31:46.554682] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:02.386 [2024-12-12 20:31:46.554690] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:02.386 [2024-12-12 20:31:46.554698] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:02.386 [2024-12-12 20:31:46.554709] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:02.386 [2024-12-12 20:31:46.554717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.387 [2024-12-12 20:31:46.554725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:02.387 [2024-12-12 20:31:46.554734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:21:02.387 [2024-12-12 20:31:46.554742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.387 [2024-12-12 20:31:46.554826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.387 [2024-12-12 20:31:46.554835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:02.387 [2024-12-12 20:31:46.554843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:02.387 [2024-12-12 20:31:46.554850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.387 [2024-12-12 20:31:46.554962] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:02.387 [2024-12-12 20:31:46.554974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:02.387 [2024-12-12 20:31:46.554982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:02.387 [2024-12-12 20:31:46.554991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.387 [2024-12-12 20:31:46.554999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:02.387 [2024-12-12 20:31:46.555007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:02.387 [2024-12-12 20:31:46.555015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:02.387 [2024-12-12 20:31:46.555023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:02.387 [2024-12-12 20:31:46.555031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:02.387 [2024-12-12 20:31:46.555039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.387 [2024-12-12 20:31:46.555046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:02.387 [2024-12-12 20:31:46.555054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:02.387 [2024-12-12 20:31:46.555062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.387 [2024-12-12 20:31:46.555076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:02.387 [2024-12-12 20:31:46.555084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:02.387 [2024-12-12 20:31:46.555091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.387 [2024-12-12 20:31:46.555099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:02.387 [2024-12-12 20:31:46.555107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:02.387 [2024-12-12 20:31:46.555114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.387 [2024-12-12 20:31:46.555122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:02.387 [2024-12-12 20:31:46.555130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:02.387 [2024-12-12 20:31:46.555137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.387 [2024-12-12 20:31:46.555145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:02.387 [2024-12-12 20:31:46.555152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:02.387 [2024-12-12 20:31:46.555160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.387 [2024-12-12 20:31:46.555167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:02.387 [2024-12-12 20:31:46.555175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:02.387 [2024-12-12 20:31:46.555182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.387 [2024-12-12 20:31:46.555190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:02.387 [2024-12-12 20:31:46.555198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:02.387 [2024-12-12 20:31:46.555205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.387 [2024-12-12 20:31:46.555213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:02.387 [2024-12-12 20:31:46.555220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:02.387 [2024-12-12 20:31:46.555227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.387 [2024-12-12 20:31:46.555235] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:02.387 [2024-12-12 20:31:46.555242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:02.387 [2024-12-12 20:31:46.555249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.387 [2024-12-12 20:31:46.555257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:02.387 [2024-12-12 20:31:46.555265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:02.387 [2024-12-12 20:31:46.555272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.387 [2024-12-12 20:31:46.555280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:02.387 [2024-12-12 20:31:46.555287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:02.387 [2024-12-12 20:31:46.555295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.387 [2024-12-12 20:31:46.555303] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:02.387 [2024-12-12 20:31:46.555311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:02.387 [2024-12-12 20:31:46.555321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.387 [2024-12-12 20:31:46.555329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.387 [2024-12-12 20:31:46.555338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:02.387 [2024-12-12 20:31:46.555345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:02.387 [2024-12-12 20:31:46.555353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:02.387 [2024-12-12 20:31:46.555360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:02.387 [2024-12-12 20:31:46.555368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:02.387 [2024-12-12 20:31:46.555374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:02.387 [2024-12-12 20:31:46.555383] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:02.387 [2024-12-12 20:31:46.555392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.387 [2024-12-12 20:31:46.555402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:02.387 [2024-12-12 20:31:46.555409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:02.387 [2024-12-12 20:31:46.555426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:02.387 [2024-12-12 20:31:46.555434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:02.387 [2024-12-12 20:31:46.555441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:02.387 [2024-12-12 20:31:46.555448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:02.387 [2024-12-12 20:31:46.555454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:02.387 [2024-12-12 20:31:46.555461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:02.387 [2024-12-12 20:31:46.555468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:02.387 [2024-12-12 20:31:46.555475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:02.387 [2024-12-12 20:31:46.555481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:02.387 [2024-12-12 20:31:46.555488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:02.387 [2024-12-12 20:31:46.555495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:02.387 [2024-12-12 20:31:46.555502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:02.387 [2024-12-12 20:31:46.555517] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:02.387 [2024-12-12 20:31:46.555524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.387 [2024-12-12 20:31:46.555533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:02.387 [2024-12-12 20:31:46.555541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:02.387 [2024-12-12 20:31:46.555548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:02.387 [2024-12-12 20:31:46.555555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:02.387 [2024-12-12 20:31:46.555562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.387 [2024-12-12 20:31:46.555569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:02.387 [2024-12-12 20:31:46.555577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:21:02.387 [2024-12-12 20:31:46.555585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.387 [2024-12-12 20:31:46.581296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.387 [2024-12-12 20:31:46.581406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:02.387 [2024-12-12 20:31:46.581474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.668 ms 00:21:02.387 [2024-12-12 20:31:46.581813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.387 [2024-12-12 20:31:46.581945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.387 [2024-12-12 20:31:46.582082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:02.387 [2024-12-12 20:31:46.582116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.063 ms 00:21:02.387 [2024-12-12 20:31:46.582136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.646 [2024-12-12 20:31:46.627297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.646 [2024-12-12 20:31:46.627450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.646 [2024-12-12 20:31:46.627524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.033 ms 00:21:02.646 [2024-12-12 20:31:46.627552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.646 [2024-12-12 20:31:46.627608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.646 [2024-12-12 20:31:46.627635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.646 [2024-12-12 20:31:46.627662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:02.646 [2024-12-12 20:31:46.627683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.646 [2024-12-12 20:31:46.628054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.646 [2024-12-12 20:31:46.628130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.646 [2024-12-12 20:31:46.628176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:21:02.646 [2024-12-12 20:31:46.628200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.646 [2024-12-12 20:31:46.628349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.646 [2024-12-12 20:31:46.628407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.646 [2024-12-12 20:31:46.628447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:21:02.646 [2024-12-12 20:31:46.628469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.646 [2024-12-12 20:31:46.641354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.646 [2024-12-12 20:31:46.641469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.646 [2024-12-12 20:31:46.641518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.825 ms 00:21:02.646 [2024-12-12 20:31:46.641539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.646 [2024-12-12 20:31:46.654426] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:02.646 [2024-12-12 20:31:46.654545] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:02.646 [2024-12-12 20:31:46.654601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.646 [2024-12-12 20:31:46.654622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:02.646 [2024-12-12 20:31:46.654641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.960 ms 00:21:02.646 [2024-12-12 20:31:46.654659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.646 [2024-12-12 20:31:46.679810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.646 [2024-12-12 20:31:46.679944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:02.646 [2024-12-12 20:31:46.680000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.645 ms 00:21:02.646 [2024-12-12 20:31:46.680025] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.646 [2024-12-12 20:31:46.691689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.646 [2024-12-12 20:31:46.691794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:02.646 [2024-12-12 20:31:46.691841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.619 ms 00:21:02.646 [2024-12-12 20:31:46.691862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.646 [2024-12-12 20:31:46.703435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.646 [2024-12-12 20:31:46.703540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:02.646 [2024-12-12 20:31:46.703587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.535 ms 00:21:02.646 [2024-12-12 20:31:46.703608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.646 [2024-12-12 20:31:46.704189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.646 [2024-12-12 20:31:46.704264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:02.646 [2024-12-12 20:31:46.704309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:21:02.646 [2024-12-12 20:31:46.704336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.646 [2024-12-12 20:31:46.760335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.646 [2024-12-12 20:31:46.760482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:02.646 [2024-12-12 20:31:46.760534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.970 ms 00:21:02.646 [2024-12-12 20:31:46.760562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.646 [2024-12-12 20:31:46.771031] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:02.646 [2024-12-12 20:31:46.773326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.646 [2024-12-12 20:31:46.773432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:02.646 [2024-12-12 20:31:46.773483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.718 ms 00:21:02.646 [2024-12-12 20:31:46.773508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.646 [2024-12-12 20:31:46.773608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.646 [2024-12-12 20:31:46.773638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:02.646 [2024-12-12 20:31:46.773709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:02.646 [2024-12-12 20:31:46.773730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.646 [2024-12-12 20:31:46.773846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.647 [2024-12-12 20:31:46.773902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:02.647 [2024-12-12 20:31:46.773945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:02.647 [2024-12-12 20:31:46.773986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.647 [2024-12-12 20:31:46.774023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.647 [2024-12-12 20:31:46.774067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:02.647 [2024-12-12 20:31:46.774090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:02.647 [2024-12-12 20:31:46.774138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.647 [2024-12-12 20:31:46.774186] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:02.647 [2024-12-12 20:31:46.774212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.647 [2024-12-12 20:31:46.774260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:02.647 [2024-12-12 20:31:46.774282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:02.647 [2024-12-12 20:31:46.774301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.647 [2024-12-12 20:31:46.797556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.647 [2024-12-12 20:31:46.797673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:02.647 [2024-12-12 20:31:46.797720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.202 ms 00:21:02.647 [2024-12-12 20:31:46.797747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.647 [2024-12-12 20:31:46.797837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.647 [2024-12-12 20:31:46.797862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:02.647 [2024-12-12 20:31:46.797902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:02.647 [2024-12-12 20:31:46.797923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.647 [2024-12-12 20:31:46.798875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 267.798 ms, result 0 00:21:04.023  [2024-12-12T20:31:48.823Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-12T20:31:50.203Z] Copying: 58/1024 [MB] (46 MBps) [2024-12-12T20:31:51.135Z] Copying: 86/1024 [MB] (27 MBps) [2024-12-12T20:31:52.070Z] Copying: 105/1024 [MB] (18 MBps) [2024-12-12T20:31:53.004Z] Copying: 123/1024 [MB] (18 MBps) [2024-12-12T20:31:53.939Z] Copying: 143/1024 [MB] (20 MBps) [2024-12-12T20:31:54.908Z] Copying: 188/1024 [MB] (44 MBps) [2024-12-12T20:31:55.861Z] Copying: 222/1024 [MB] (33 MBps) [2024-12-12T20:31:57.235Z] Copying: 240/1024 [MB] (18 MBps) [2024-12-12T20:31:58.169Z] Copying: 256/1024 [MB] (16 MBps) [2024-12-12T20:31:59.103Z] Copying: 270/1024 [MB] (13 MBps) [2024-12-12T20:32:00.037Z] Copying: 285/1024 [MB] (14 MBps) [2024-12-12T20:32:00.971Z] Copying: 303/1024 [MB] (17 MBps) [2024-12-12T20:32:01.904Z] Copying: 322/1024 [MB] (19 MBps) [2024-12-12T20:32:02.839Z] Copying: 333/1024 [MB] (11 MBps) [2024-12-12T20:32:04.210Z] Copying: 345/1024 [MB] (11 MBps) [2024-12-12T20:32:05.144Z] Copying: 356/1024 [MB] (11 MBps) [2024-12-12T20:32:06.078Z] Copying: 367/1024 [MB] (11 MBps) [2024-12-12T20:32:07.014Z] Copying: 379/1024 [MB] (11 MBps) [2024-12-12T20:32:07.972Z] Copying: 390/1024 [MB] (11 MBps) [2024-12-12T20:32:08.920Z] Copying: 401/1024 [MB] (11 MBps) [2024-12-12T20:32:09.854Z] Copying: 413/1024 [MB] (11 MBps) [2024-12-12T20:32:11.229Z] Copying: 425/1024 [MB] (11 MBps) [2024-12-12T20:32:12.163Z] Copying: 436/1024 [MB] (11 MBps) [2024-12-12T20:32:13.098Z] Copying: 447/1024 [MB] (11 MBps) [2024-12-12T20:32:14.033Z] Copying: 465/1024 [MB] (17 MBps) [2024-12-12T20:32:14.967Z] Copying: 515/1024 [MB] (50 
MBps) [2024-12-12T20:32:15.900Z] Copying: 533/1024 [MB] (17 MBps) [2024-12-12T20:32:16.836Z] Copying: 551/1024 [MB] (17 MBps) [2024-12-12T20:32:18.225Z] Copying: 571/1024 [MB] (19 MBps) [2024-12-12T20:32:19.160Z] Copying: 588/1024 [MB] (17 MBps) [2024-12-12T20:32:20.097Z] Copying: 602/1024 [MB] (14 MBps) [2024-12-12T20:32:21.032Z] Copying: 623/1024 [MB] (21 MBps) [2024-12-12T20:32:21.968Z] Copying: 645/1024 [MB] (21 MBps) [2024-12-12T20:32:22.907Z] Copying: 667/1024 [MB] (21 MBps) [2024-12-12T20:32:23.842Z] Copying: 721/1024 [MB] (54 MBps) [2024-12-12T20:32:25.216Z] Copying: 753/1024 [MB] (32 MBps) [2024-12-12T20:32:26.152Z] Copying: 778/1024 [MB] (24 MBps) [2024-12-12T20:32:27.087Z] Copying: 803/1024 [MB] (24 MBps) [2024-12-12T20:32:28.020Z] Copying: 824/1024 [MB] (21 MBps) [2024-12-12T20:32:28.955Z] Copying: 862/1024 [MB] (38 MBps) [2024-12-12T20:32:29.888Z] Copying: 888/1024 [MB] (25 MBps) [2024-12-12T20:32:30.822Z] Copying: 912/1024 [MB] (24 MBps) [2024-12-12T20:32:32.194Z] Copying: 934/1024 [MB] (21 MBps) [2024-12-12T20:32:33.127Z] Copying: 956/1024 [MB] (21 MBps) [2024-12-12T20:32:34.102Z] Copying: 973/1024 [MB] (16 MBps) [2024-12-12T20:32:35.037Z] Copying: 992/1024 [MB] (19 MBps) [2024-12-12T20:32:35.971Z] Copying: 1010/1024 [MB] (18 MBps) [2024-12-12T20:32:35.971Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-12-12 20:32:35.623908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.743 [2024-12-12 20:32:35.624028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:51.743 [2024-12-12 20:32:35.624074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:51.743 [2024-12-12 20:32:35.624224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.743 [2024-12-12 20:32:35.624265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:51.743 [2024-12-12 20:32:35.627288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.743 [2024-12-12 20:32:35.627325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:51.743 [2024-12-12 20:32:35.627343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.001 ms 00:21:51.743 [2024-12-12 20:32:35.627364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.743 [2024-12-12 20:32:35.630093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.743 [2024-12-12 20:32:35.630129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:51.743 [2024-12-12 20:32:35.630144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.692 ms 00:21:51.743 [2024-12-12 20:32:35.630155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.743 [2024-12-12 20:32:35.650262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.743 [2024-12-12 20:32:35.650300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:51.743 [2024-12-12 20:32:35.650315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.086 ms 00:21:51.743 [2024-12-12 20:32:35.650326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.743 [2024-12-12 20:32:35.656554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.743 [2024-12-12 20:32:35.656588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:51.743 [2024-12-12 20:32:35.656602] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.183 ms 00:21:51.743 [2024-12-12 20:32:35.656612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.743 [2024-12-12 20:32:35.680372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.743 [2024-12-12 20:32:35.680538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:51.743 [2024-12-12 20:32:35.680559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.704 ms 00:21:51.743 [2024-12-12 20:32:35.680570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.743 [2024-12-12 20:32:35.694321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.743 [2024-12-12 20:32:35.694356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:51.743 [2024-12-12 20:32:35.694373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.681 ms 00:21:51.743 [2024-12-12 20:32:35.694383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.743 [2024-12-12 20:32:35.694576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.743 [2024-12-12 20:32:35.694607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:51.743 [2024-12-12 20:32:35.694620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:21:51.743 [2024-12-12 20:32:35.694632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.743 [2024-12-12 20:32:35.718139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.743 [2024-12-12 20:32:35.718171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:51.743 [2024-12-12 20:32:35.718186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.486 ms 00:21:51.743 [2024-12-12 20:32:35.718196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.743 [2024-12-12 20:32:35.741757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.743 [2024-12-12 20:32:35.741789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:51.743 [2024-12-12 20:32:35.741803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.521 ms 00:21:51.743 [2024-12-12 20:32:35.741813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.743 [2024-12-12 20:32:35.764685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.743 [2024-12-12 20:32:35.764719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:51.743 [2024-12-12 20:32:35.764734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.831 ms 00:21:51.743 [2024-12-12 20:32:35.764744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.743 [2024-12-12 20:32:35.787826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.743 [2024-12-12 20:32:35.787954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:51.743 [2024-12-12 20:32:35.787975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.014 ms 00:21:51.743 [2024-12-12 20:32:35.787986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.743 [2024-12-12 20:32:35.788022] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:51.743 [2024-12-12 20:32:35.788042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788368] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:51.743 [2024-12-12 20:32:35.788590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 
20:32:35.788719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.788997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:21:51.744 [2024-12-12 20:32:35.789036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:21:51.744 [2024-12-12 20:32:35.789367] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:51.744 [2024-12-12 20:32:35.789384] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c27c25e-6895-4077-a8c7-dd2dac7fe71c 00:21:51.744 [2024-12-12 20:32:35.789398] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:51.744 [2024-12-12 20:32:35.789410] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:51.744 [2024-12-12 20:32:35.789432] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:51.744 [2024-12-12 20:32:35.789444] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:51.744 [2024-12-12 20:32:35.789455] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:51.744 [2024-12-12 20:32:35.789476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:51.744 [2024-12-12 20:32:35.789488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:51.744 [2024-12-12 20:32:35.789500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:51.744 [2024-12-12 20:32:35.789510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:51.744 [2024-12-12 20:32:35.789522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.744 [2024-12-12 20:32:35.789534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:51.744 [2024-12-12 20:32:35.789547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.500 ms 00:21:51.744 [2024-12-12 20:32:35.789560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.744 [2024-12-12 20:32:35.803255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.744 [2024-12-12 20:32:35.803382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:51.744 [2024-12-12 20:32:35.803402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.667 ms 00:21:51.744 [2024-12-12 20:32:35.803437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.744 [2024-12-12 20:32:35.803873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.744 [2024-12-12 20:32:35.803898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:51.744 [2024-12-12 20:32:35.803911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:21:51.744 [2024-12-12 20:32:35.803928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.744 [2024-12-12 20:32:35.836688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.744 [2024-12-12 20:32:35.836726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:51.744 [2024-12-12 20:32:35.836740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.744 [2024-12-12 20:32:35.836751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.744 [2024-12-12 20:32:35.836828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.744 [2024-12-12 20:32:35.836841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:51.744 [2024-12-12 20:32:35.836853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.744 [2024-12-12 20:32:35.836870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
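The records above and below follow a fixed pattern from mngt/ftl_mngt.c: each management step is traced as a quartet of NOTICE lines (427: Action, 428: name, 430: duration, 431: status), and the Rollback steps that continue below close out with the finish_msg summary for the 'FTL shutdown' pipeline (duration = 356.482 ms, result 0). Two of the dumped values are worth decoding. In ftl_dev_dump_bands, "Band N: 0 / 261120 wr_cnt: 0 state: free" reads as zero valid blocks out of a 261120-block band; after this clean shutdown all 100 bands report free. In ftl_dev_dump_stats, "WAF: inf" follows from "total writes: 960" against "user writes: 0". The stand-alone C sketch below is illustration only, not SPDK source; it reproduces that arithmetic plus the L2P sizing reported in the later startup layout dump (20971520 entries at 4 bytes each = 80.00 MiB), assuming WAF carries its usual meaning of total media writes divided by user writes.

#include <math.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* "total writes: 960" vs "user writes: 0": with no user writes the
     * write-amplification ratio diverges, which the dump prints as "inf". */
    double total_writes = 960.0, user_writes = 0.0;
    double waf = user_writes > 0.0 ? total_writes / user_writes : INFINITY;
    printf("WAF: %g\n", waf);                      /* -> WAF: inf */

    /* "L2P entries: 20971520" with "L2P address size: 4" bytes per entry:
     * 20971520 * 4 B = 83886080 B = 80 MiB, matching the startup dump's
     * "Region l2p ... blocks: 80.00 MiB". */
    uint64_t l2p_bytes = 20971520ULL * 4ULL;
    printf("L2P table: %.2f MiB\n", l2p_bytes / (1024.0 * 1024.0));

    /* "Band N: 0 / 261120": valid blocks over band capacity in blocks. */
    printf("band utilization: %u / %u\n", 0u, 261120u);
    return 0;
}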
00:21:51.744 [2024-12-12 20:32:35.836953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.744 [2024-12-12 20:32:35.836969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:51.744 [2024-12-12 20:32:35.836982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.744 [2024-12-12 20:32:35.836994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.744 [2024-12-12 20:32:35.837017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.744 [2024-12-12 20:32:35.837030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:51.744 [2024-12-12 20:32:35.837042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.744 [2024-12-12 20:32:35.837054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.744 [2024-12-12 20:32:35.915957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.744 [2024-12-12 20:32:35.916000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:51.744 [2024-12-12 20:32:35.916016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.745 [2024-12-12 20:32:35.916027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.003 [2024-12-12 20:32:35.979532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.003 [2024-12-12 20:32:35.979579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:52.003 [2024-12-12 20:32:35.979597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.003 [2024-12-12 20:32:35.979613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.003 [2024-12-12 20:32:35.979711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.003 [2024-12-12 20:32:35.979726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:52.003 [2024-12-12 20:32:35.979738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.003 [2024-12-12 20:32:35.979750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.003 [2024-12-12 20:32:35.979800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.003 [2024-12-12 20:32:35.979816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:52.003 [2024-12-12 20:32:35.979828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.003 [2024-12-12 20:32:35.979840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.003 [2024-12-12 20:32:35.979968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.003 [2024-12-12 20:32:35.979983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:52.003 [2024-12-12 20:32:35.979995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.003 [2024-12-12 20:32:35.980008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.003 [2024-12-12 20:32:35.980051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.003 [2024-12-12 20:32:35.980065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:52.003 [2024-12-12 20:32:35.980078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.003 [2024-12-12 
20:32:35.980091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.003 [2024-12-12 20:32:35.980139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.003 [2024-12-12 20:32:35.980157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:52.003 [2024-12-12 20:32:35.980169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.003 [2024-12-12 20:32:35.980182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.003 [2024-12-12 20:32:35.980236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.003 [2024-12-12 20:32:35.980252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:52.003 [2024-12-12 20:32:35.980264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.003 [2024-12-12 20:32:35.980276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.003 [2024-12-12 20:32:35.980462] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 356.482 ms, result 0 00:21:52.940 00:21:52.940 00:21:52.940 20:32:37 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:52.940 [2024-12-12 20:32:37.093508] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:21:52.940 [2024-12-12 20:32:37.093781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79493 ] 00:21:53.198 [2024-12-12 20:32:37.253959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.198 [2024-12-12 20:32:37.349880] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.457 [2024-12-12 20:32:37.609263] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:53.457 [2024-12-12 20:32:37.609333] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:53.716 [2024-12-12 20:32:37.766310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.716 [2024-12-12 20:32:37.766368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:53.716 [2024-12-12 20:32:37.766387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:53.716 [2024-12-12 20:32:37.766399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.716 [2024-12-12 20:32:37.766489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.716 [2024-12-12 20:32:37.766510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:53.716 [2024-12-12 20:32:37.766526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:53.716 [2024-12-12 20:32:37.766539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.716 [2024-12-12 20:32:37.766588] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:53.716 [2024-12-12 20:32:37.767400] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:53.716 [2024-12-12 
20:32:37.767485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.716 [2024-12-12 20:32:37.767500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:53.716 [2024-12-12 20:32:37.767513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:21:53.716 [2024-12-12 20:32:37.767525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.716 [2024-12-12 20:32:37.768705] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:53.716 [2024-12-12 20:32:37.781369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.716 [2024-12-12 20:32:37.781408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:53.716 [2024-12-12 20:32:37.781448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.666 ms 00:21:53.716 [2024-12-12 20:32:37.781459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.716 [2024-12-12 20:32:37.781535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.716 [2024-12-12 20:32:37.781552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:53.716 [2024-12-12 20:32:37.781565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:53.716 [2024-12-12 20:32:37.781577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.716 [2024-12-12 20:32:37.786650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.716 [2024-12-12 20:32:37.786683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:53.716 [2024-12-12 20:32:37.786697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.998 ms 00:21:53.716 [2024-12-12 20:32:37.786712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.716 [2024-12-12 20:32:37.786801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.716 [2024-12-12 20:32:37.786815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:53.716 [2024-12-12 20:32:37.786829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:53.716 [2024-12-12 20:32:37.786842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.716 [2024-12-12 20:32:37.786909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.716 [2024-12-12 20:32:37.786924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:53.716 [2024-12-12 20:32:37.786937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:53.716 [2024-12-12 20:32:37.786949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.716 [2024-12-12 20:32:37.786985] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:53.716 [2024-12-12 20:32:37.790470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.716 [2024-12-12 20:32:37.790503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:53.716 [2024-12-12 20:32:37.790520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.492 ms 00:21:53.716 [2024-12-12 20:32:37.790532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.716 [2024-12-12 20:32:37.790574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.716 [2024-12-12 20:32:37.790588] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:53.716 [2024-12-12 20:32:37.790600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:53.716 [2024-12-12 20:32:37.790612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.716 [2024-12-12 20:32:37.790641] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:53.716 [2024-12-12 20:32:37.790669] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:53.716 [2024-12-12 20:32:37.790718] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:53.716 [2024-12-12 20:32:37.790745] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:53.716 [2024-12-12 20:32:37.790887] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:53.716 [2024-12-12 20:32:37.790904] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:53.716 [2024-12-12 20:32:37.790920] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:53.716 [2024-12-12 20:32:37.790936] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:53.716 [2024-12-12 20:32:37.790951] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:53.716 [2024-12-12 20:32:37.790964] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:53.716 [2024-12-12 20:32:37.790976] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:53.716 [2024-12-12 20:32:37.790988] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:53.716 [2024-12-12 20:32:37.791003] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:53.716 [2024-12-12 20:32:37.791015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.716 [2024-12-12 20:32:37.791028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:53.716 [2024-12-12 20:32:37.791041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:21:53.716 [2024-12-12 20:32:37.791053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.716 [2024-12-12 20:32:37.791168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.716 [2024-12-12 20:32:37.791182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:53.716 [2024-12-12 20:32:37.791195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:21:53.716 [2024-12-12 20:32:37.791207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.716 [2024-12-12 20:32:37.791362] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:53.716 [2024-12-12 20:32:37.791379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:53.716 [2024-12-12 20:32:37.791393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:53.716 [2024-12-12 20:32:37.791406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:53.716 [2024-12-12 20:32:37.791443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region l2p 00:21:53.716 [2024-12-12 20:32:37.791456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:53.716 [2024-12-12 20:32:37.791467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:53.716 [2024-12-12 20:32:37.791480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:53.716 [2024-12-12 20:32:37.791492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:53.716 [2024-12-12 20:32:37.791504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:53.716 [2024-12-12 20:32:37.791516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:53.716 [2024-12-12 20:32:37.791527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:53.716 [2024-12-12 20:32:37.791538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:53.716 [2024-12-12 20:32:37.791558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:53.716 [2024-12-12 20:32:37.791570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:53.716 [2024-12-12 20:32:37.791581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:53.716 [2024-12-12 20:32:37.791593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:53.717 [2024-12-12 20:32:37.791605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:53.717 [2024-12-12 20:32:37.791615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:53.717 [2024-12-12 20:32:37.791626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:53.717 [2024-12-12 20:32:37.791649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:53.717 [2024-12-12 20:32:37.791661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:53.717 [2024-12-12 20:32:37.791671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:53.717 [2024-12-12 20:32:37.791683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:53.717 [2024-12-12 20:32:37.791694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:53.717 [2024-12-12 20:32:37.791707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:53.717 [2024-12-12 20:32:37.791719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:53.717 [2024-12-12 20:32:37.791730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:53.717 [2024-12-12 20:32:37.791741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:53.717 [2024-12-12 20:32:37.791753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:53.717 [2024-12-12 20:32:37.791764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:53.717 [2024-12-12 20:32:37.791775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:53.717 [2024-12-12 20:32:37.791786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:53.717 [2024-12-12 20:32:37.791798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:53.717 [2024-12-12 20:32:37.791809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:53.717 [2024-12-12 20:32:37.791821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:53.717 [2024-12-12 20:32:37.791833] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:53.717 [2024-12-12 20:32:37.791844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:53.717 [2024-12-12 20:32:37.791856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:53.717 [2024-12-12 20:32:37.791867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:53.717 [2024-12-12 20:32:37.791879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:53.717 [2024-12-12 20:32:37.791890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:53.717 [2024-12-12 20:32:37.791901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:53.717 [2024-12-12 20:32:37.791912] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:53.717 [2024-12-12 20:32:37.791924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:53.717 [2024-12-12 20:32:37.791936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:53.717 [2024-12-12 20:32:37.791948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:53.717 [2024-12-12 20:32:37.791960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:53.717 [2024-12-12 20:32:37.791972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:53.717 [2024-12-12 20:32:37.791983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:53.717 [2024-12-12 20:32:37.791994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:53.717 [2024-12-12 20:32:37.792006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:53.717 [2024-12-12 20:32:37.792017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:53.717 [2024-12-12 20:32:37.792030] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:53.717 [2024-12-12 20:32:37.792045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:53.717 [2024-12-12 20:32:37.792063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:53.717 [2024-12-12 20:32:37.792075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:53.717 [2024-12-12 20:32:37.792087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:53.717 [2024-12-12 20:32:37.792100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:53.717 [2024-12-12 20:32:37.792113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:53.717 [2024-12-12 20:32:37.792126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:53.717 [2024-12-12 20:32:37.792138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:53.717 [2024-12-12 20:32:37.792150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:21:53.717 [2024-12-12 20:32:37.792163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:53.717 [2024-12-12 20:32:37.792175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:53.717 [2024-12-12 20:32:37.792187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:53.717 [2024-12-12 20:32:37.792200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:53.717 [2024-12-12 20:32:37.792212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:53.717 [2024-12-12 20:32:37.792225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:53.717 [2024-12-12 20:32:37.792238] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:53.717 [2024-12-12 20:32:37.792253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:53.717 [2024-12-12 20:32:37.792267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:53.717 [2024-12-12 20:32:37.792280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:53.717 [2024-12-12 20:32:37.792292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:53.717 [2024-12-12 20:32:37.792304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:53.717 [2024-12-12 20:32:37.792317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.717 [2024-12-12 20:32:37.792329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:53.717 [2024-12-12 20:32:37.792342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms 00:21:53.717 [2024-12-12 20:32:37.792354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.717 [2024-12-12 20:32:37.818404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.717 [2024-12-12 20:32:37.818450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:53.717 [2024-12-12 20:32:37.818466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.977 ms 00:21:53.717 [2024-12-12 20:32:37.818481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.717 [2024-12-12 20:32:37.818587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.717 [2024-12-12 20:32:37.818603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:53.717 [2024-12-12 20:32:37.818617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:21:53.717 [2024-12-12 20:32:37.818629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.717 [2024-12-12 20:32:37.863092] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:53.717 [2024-12-12 20:32:37.863256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:53.717 [2024-12-12 20:32:37.863280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.393 ms 00:21:53.717 [2024-12-12 20:32:37.863292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.717 [2024-12-12 20:32:37.863340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.717 [2024-12-12 20:32:37.863355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:53.717 [2024-12-12 20:32:37.863374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:53.717 [2024-12-12 20:32:37.863385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.717 [2024-12-12 20:32:37.863838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.717 [2024-12-12 20:32:37.863865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:53.717 [2024-12-12 20:32:37.863878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:21:53.717 [2024-12-12 20:32:37.863889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.717 [2024-12-12 20:32:37.864064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.717 [2024-12-12 20:32:37.864085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:53.717 [2024-12-12 20:32:37.864102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:21:53.717 [2024-12-12 20:32:37.864114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.717 [2024-12-12 20:32:37.877171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.717 [2024-12-12 20:32:37.877207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:53.717 [2024-12-12 20:32:37.877221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.030 ms 00:21:53.717 [2024-12-12 20:32:37.877233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.717 [2024-12-12 20:32:37.890020] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:53.717 [2024-12-12 20:32:37.890151] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:53.718 [2024-12-12 20:32:37.890172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.718 [2024-12-12 20:32:37.890183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:53.718 [2024-12-12 20:32:37.890196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.819 ms 00:21:53.718 [2024-12-12 20:32:37.890207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.718 [2024-12-12 20:32:37.914608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.718 [2024-12-12 20:32:37.914648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:53.718 [2024-12-12 20:32:37.914664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.358 ms 00:21:53.718 [2024-12-12 20:32:37.914675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.718 [2024-12-12 20:32:37.926492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.718 [2024-12-12 20:32:37.926526] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:53.718 [2024-12-12 20:32:37.926541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.757 ms 00:21:53.718 [2024-12-12 20:32:37.926551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.718 [2024-12-12 20:32:37.938344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.718 [2024-12-12 20:32:37.938376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:53.718 [2024-12-12 20:32:37.938391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.749 ms 00:21:53.718 [2024-12-12 20:32:37.938402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.718 [2024-12-12 20:32:37.939071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.718 [2024-12-12 20:32:37.939102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:53.718 [2024-12-12 20:32:37.939118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:21:53.718 [2024-12-12 20:32:37.939129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.976 [2024-12-12 20:32:37.995525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.976 [2024-12-12 20:32:37.995578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:53.976 [2024-12-12 20:32:37.995604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.368 ms 00:21:53.976 [2024-12-12 20:32:37.995616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.976 [2024-12-12 20:32:38.005919] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:53.976 [2024-12-12 20:32:38.008123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.976 [2024-12-12 20:32:38.008159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:53.976 [2024-12-12 20:32:38.008175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.453 ms 00:21:53.976 [2024-12-12 20:32:38.008188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.976 [2024-12-12 20:32:38.008294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.976 [2024-12-12 20:32:38.008310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:53.976 [2024-12-12 20:32:38.008325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:53.976 [2024-12-12 20:32:38.008341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.976 [2024-12-12 20:32:38.008457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.976 [2024-12-12 20:32:38.008475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:53.976 [2024-12-12 20:32:38.008489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:53.976 [2024-12-12 20:32:38.008502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.976 [2024-12-12 20:32:38.008535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.976 [2024-12-12 20:32:38.008550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:53.976 [2024-12-12 20:32:38.008562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:53.976 [2024-12-12 20:32:38.008592] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.976 [2024-12-12 20:32:38.008639] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:53.976 [2024-12-12 20:32:38.008656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.976 [2024-12-12 20:32:38.008668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:53.976 [2024-12-12 20:32:38.008681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:53.976 [2024-12-12 20:32:38.008694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.976 [2024-12-12 20:32:38.032497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.976 [2024-12-12 20:32:38.032535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:53.976 [2024-12-12 20:32:38.032557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.775 ms 00:21:53.976 [2024-12-12 20:32:38.032568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.976 [2024-12-12 20:32:38.032657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.976 [2024-12-12 20:32:38.032673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:53.976 [2024-12-12 20:32:38.032687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:53.976 [2024-12-12 20:32:38.032699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.976 [2024-12-12 20:32:38.034261] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 267.546 ms, result 0 00:21:55.350  [2024-12-12T20:32:40.511Z] Copying: 20/1024 [MB] (20 MBps) [2024-12-12T20:32:41.451Z] Copying: 40/1024 [MB] (20 MBps) [2024-12-12T20:32:42.386Z] Copying: 55/1024 [MB] (14 MBps) [2024-12-12T20:32:43.319Z] Copying: 67/1024 [MB] (12 MBps) [2024-12-12T20:32:44.254Z] Copying: 79/1024 [MB] (12 MBps) [2024-12-12T20:32:45.628Z] Copying: 92/1024 [MB] (12 MBps) [2024-12-12T20:32:46.563Z] Copying: 104/1024 [MB] (12 MBps) [2024-12-12T20:32:47.497Z] Copying: 116/1024 [MB] (12 MBps) [2024-12-12T20:32:48.431Z] Copying: 128/1024 [MB] (11 MBps) [2024-12-12T20:32:49.441Z] Copying: 140/1024 [MB] (12 MBps) [2024-12-12T20:32:50.375Z] Copying: 152/1024 [MB] (11 MBps) [2024-12-12T20:32:51.309Z] Copying: 164/1024 [MB] (12 MBps) [2024-12-12T20:32:52.243Z] Copying: 176/1024 [MB] (12 MBps) [2024-12-12T20:32:53.619Z] Copying: 188/1024 [MB] (12 MBps) [2024-12-12T20:32:54.555Z] Copying: 200/1024 [MB] (11 MBps) [2024-12-12T20:32:55.488Z] Copying: 212/1024 [MB] (12 MBps) [2024-12-12T20:32:56.434Z] Copying: 224/1024 [MB] (11 MBps) [2024-12-12T20:32:57.367Z] Copying: 236/1024 [MB] (11 MBps) [2024-12-12T20:32:58.301Z] Copying: 248/1024 [MB] (12 MBps) [2024-12-12T20:32:59.237Z] Copying: 264/1024 [MB] (16 MBps) [2024-12-12T20:33:00.611Z] Copying: 276/1024 [MB] (11 MBps) [2024-12-12T20:33:01.544Z] Copying: 288/1024 [MB] (11 MBps) [2024-12-12T20:33:02.477Z] Copying: 299/1024 [MB] (10 MBps) [2024-12-12T20:33:03.411Z] Copying: 310/1024 [MB] (11 MBps) [2024-12-12T20:33:04.371Z] Copying: 321/1024 [MB] (11 MBps) [2024-12-12T20:33:05.307Z] Copying: 333/1024 [MB] (11 MBps) [2024-12-12T20:33:06.243Z] Copying: 344/1024 [MB] (11 MBps) [2024-12-12T20:33:07.618Z] Copying: 355/1024 [MB] (11 MBps) [2024-12-12T20:33:08.552Z] Copying: 367/1024 [MB] (11 MBps) [2024-12-12T20:33:09.486Z] Copying: 378/1024 [MB] (11 MBps) 
[2024-12-12T20:33:10.420Z] Copying: 390/1024 [MB] (11 MBps) [2024-12-12T20:33:11.355Z] Copying: 401/1024 [MB] (11 MBps) [2024-12-12T20:33:12.290Z] Copying: 413/1024 [MB] (11 MBps) [2024-12-12T20:33:13.223Z] Copying: 424/1024 [MB] (11 MBps) [2024-12-12T20:33:14.597Z] Copying: 436/1024 [MB] (11 MBps) [2024-12-12T20:33:15.531Z] Copying: 447/1024 [MB] (11 MBps) [2024-12-12T20:33:16.465Z] Copying: 459/1024 [MB] (11 MBps) [2024-12-12T20:33:17.400Z] Copying: 471/1024 [MB] (12 MBps) [2024-12-12T20:33:18.332Z] Copying: 483/1024 [MB] (11 MBps) [2024-12-12T20:33:19.350Z] Copying: 494/1024 [MB] (11 MBps) [2024-12-12T20:33:20.284Z] Copying: 506/1024 [MB] (11 MBps) [2024-12-12T20:33:21.218Z] Copying: 517/1024 [MB] (11 MBps) [2024-12-12T20:33:22.593Z] Copying: 529/1024 [MB] (11 MBps) [2024-12-12T20:33:23.528Z] Copying: 540/1024 [MB] (11 MBps) [2024-12-12T20:33:24.462Z] Copying: 552/1024 [MB] (11 MBps) [2024-12-12T20:33:25.396Z] Copying: 564/1024 [MB] (11 MBps) [2024-12-12T20:33:26.331Z] Copying: 575/1024 [MB] (11 MBps) [2024-12-12T20:33:27.265Z] Copying: 587/1024 [MB] (11 MBps) [2024-12-12T20:33:28.639Z] Copying: 599/1024 [MB] (11 MBps) [2024-12-12T20:33:29.573Z] Copying: 610/1024 [MB] (11 MBps) [2024-12-12T20:33:30.508Z] Copying: 622/1024 [MB] (11 MBps) [2024-12-12T20:33:31.442Z] Copying: 633/1024 [MB] (11 MBps) [2024-12-12T20:33:32.377Z] Copying: 645/1024 [MB] (12 MBps) [2024-12-12T20:33:33.465Z] Copying: 663/1024 [MB] (17 MBps) [2024-12-12T20:33:34.414Z] Copying: 675/1024 [MB] (11 MBps) [2024-12-12T20:33:35.346Z] Copying: 687/1024 [MB] (11 MBps) [2024-12-12T20:33:36.277Z] Copying: 699/1024 [MB] (12 MBps) [2024-12-12T20:33:37.211Z] Copying: 719/1024 [MB] (20 MBps) [2024-12-12T20:33:38.584Z] Copying: 744/1024 [MB] (24 MBps) [2024-12-12T20:33:39.516Z] Copying: 766/1024 [MB] (22 MBps) [2024-12-12T20:33:40.449Z] Copying: 795/1024 [MB] (29 MBps) [2024-12-12T20:33:41.384Z] Copying: 822/1024 [MB] (26 MBps) [2024-12-12T20:33:42.341Z] Copying: 847/1024 [MB] (24 MBps) [2024-12-12T20:33:43.273Z] Copying: 866/1024 [MB] (19 MBps) [2024-12-12T20:33:44.646Z] Copying: 889/1024 [MB] (23 MBps) [2024-12-12T20:33:45.212Z] Copying: 907/1024 [MB] (17 MBps) [2024-12-12T20:33:46.584Z] Copying: 923/1024 [MB] (16 MBps) [2024-12-12T20:33:47.517Z] Copying: 941/1024 [MB] (18 MBps) [2024-12-12T20:33:48.451Z] Copying: 956/1024 [MB] (14 MBps) [2024-12-12T20:33:49.407Z] Copying: 976/1024 [MB] (20 MBps) [2024-12-12T20:33:50.340Z] Copying: 989/1024 [MB] (12 MBps) [2024-12-12T20:33:51.273Z] Copying: 1003/1024 [MB] (14 MBps) [2024-12-12T20:33:51.840Z] Copying: 1017/1024 [MB] (13 MBps) [2024-12-12T20:33:51.840Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-12-12 20:33:51.682247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.612 [2024-12-12 20:33:51.682325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:07.612 [2024-12-12 20:33:51.682349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:07.612 [2024-12-12 20:33:51.682363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.612 [2024-12-12 20:33:51.682402] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:07.612 [2024-12-12 20:33:51.688245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.612 [2024-12-12 20:33:51.688282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:07.612 [2024-12-12 20:33:51.688292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 5.787 ms 00:23:07.613 [2024-12-12 20:33:51.688300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.613 [2024-12-12 20:33:51.688527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.613 [2024-12-12 20:33:51.688537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:07.613 [2024-12-12 20:33:51.688546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:23:07.613 [2024-12-12 20:33:51.688554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.613 [2024-12-12 20:33:51.691985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.613 [2024-12-12 20:33:51.692004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:07.613 [2024-12-12 20:33:51.692013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.418 ms 00:23:07.613 [2024-12-12 20:33:51.692025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.613 [2024-12-12 20:33:51.698098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.613 [2024-12-12 20:33:51.698237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:07.613 [2024-12-12 20:33:51.698254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.057 ms 00:23:07.613 [2024-12-12 20:33:51.698261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.613 [2024-12-12 20:33:51.722025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.613 [2024-12-12 20:33:51.722056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:07.613 [2024-12-12 20:33:51.722067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.713 ms 00:23:07.613 [2024-12-12 20:33:51.722074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.613 [2024-12-12 20:33:51.735731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.613 [2024-12-12 20:33:51.735776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:07.613 [2024-12-12 20:33:51.735787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.637 ms 00:23:07.613 [2024-12-12 20:33:51.735794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.613 [2024-12-12 20:33:51.735923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.613 [2024-12-12 20:33:51.735933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:07.613 [2024-12-12 20:33:51.735941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:07.613 [2024-12-12 20:33:51.735948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.613 [2024-12-12 20:33:51.759499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.613 [2024-12-12 20:33:51.759529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:07.613 [2024-12-12 20:33:51.759540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.538 ms 00:23:07.613 [2024-12-12 20:33:51.759547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.613 [2024-12-12 20:33:51.782713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.613 [2024-12-12 20:33:51.782743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:07.613 [2024-12-12 
20:33:51.782753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.148 ms 00:23:07.613 [2024-12-12 20:33:51.782760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.613 [2024-12-12 20:33:51.805337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.613 [2024-12-12 20:33:51.805478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:07.613 [2024-12-12 20:33:51.805504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.559 ms 00:23:07.613 [2024-12-12 20:33:51.805512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.613 [2024-12-12 20:33:51.827549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.613 [2024-12-12 20:33:51.827576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:07.613 [2024-12-12 20:33:51.827586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.995 ms 00:23:07.613 [2024-12-12 20:33:51.827594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.613 [2024-12-12 20:33:51.827612] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:07.613 [2024-12-12 20:33:51.827629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [2024-12-12 20:33:51.827759] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:07.613 [... identical entries for Bands 18 through 90 elided; each reports 0 / 261120 wr_cnt: 0 state: free ...] 00:23:07.614 [2024-12-12 20:33:51.828298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120
wr_cnt: 0 state: free 00:23:07.614 [2024-12-12 20:33:51.828305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:07.614 [2024-12-12 20:33:51.828312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:07.614 [2024-12-12 20:33:51.828319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:07.614 [2024-12-12 20:33:51.828326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:07.614 [2024-12-12 20:33:51.828333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:07.614 [2024-12-12 20:33:51.828340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:07.614 [2024-12-12 20:33:51.828348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:07.614 [2024-12-12 20:33:51.828355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:07.614 [2024-12-12 20:33:51.828362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:07.614 [2024-12-12 20:33:51.828377] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:07.614 [2024-12-12 20:33:51.828385] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c27c25e-6895-4077-a8c7-dd2dac7fe71c 00:23:07.614 [2024-12-12 20:33:51.828392] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:07.614 [2024-12-12 20:33:51.828399] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:07.614 [2024-12-12 20:33:51.828406] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:07.614 [2024-12-12 20:33:51.828427] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:07.614 [2024-12-12 20:33:51.828441] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:07.614 [2024-12-12 20:33:51.828449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:07.614 [2024-12-12 20:33:51.828456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:07.614 [2024-12-12 20:33:51.828463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:07.614 [2024-12-12 20:33:51.828469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:07.614 [2024-12-12 20:33:51.828475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.614 [2024-12-12 20:33:51.828482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:07.614 [2024-12-12 20:33:51.828490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:23:07.614 [2024-12-12 20:33:51.828499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.873 [2024-12-12 20:33:51.840820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.873 [2024-12-12 20:33:51.840848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:07.873 [2024-12-12 20:33:51.840859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.300 ms 00:23:07.873 [2024-12-12 20:33:51.840866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.873 [2024-12-12 20:33:51.841197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:07.873 [2024-12-12 20:33:51.841205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:07.873 [2024-12-12 20:33:51.841218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:23:07.873 [2024-12-12 20:33:51.841225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.873 [2024-12-12 20:33:51.873985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.873 [2024-12-12 20:33:51.874017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:07.873 [2024-12-12 20:33:51.874027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.873 [2024-12-12 20:33:51.874034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.873 [2024-12-12 20:33:51.874086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.873 [2024-12-12 20:33:51.874094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:07.873 [2024-12-12 20:33:51.874106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.873 [2024-12-12 20:33:51.874113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.873 [2024-12-12 20:33:51.874166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.873 [2024-12-12 20:33:51.874175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:07.873 [2024-12-12 20:33:51.874183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.873 [2024-12-12 20:33:51.874190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.873 [2024-12-12 20:33:51.874204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.873 [2024-12-12 20:33:51.874211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:07.873 [2024-12-12 20:33:51.874219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.873 [2024-12-12 20:33:51.874228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.873 [2024-12-12 20:33:51.951455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.873 [2024-12-12 20:33:51.951491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:07.873 [2024-12-12 20:33:51.951501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.873 [2024-12-12 20:33:51.951509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.873 [2024-12-12 20:33:52.014831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.873 [2024-12-12 20:33:52.015001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:07.873 [2024-12-12 20:33:52.015021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.873 [2024-12-12 20:33:52.015029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.873 [2024-12-12 20:33:52.015097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.873 [2024-12-12 20:33:52.015106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:07.873 [2024-12-12 20:33:52.015114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.873 [2024-12-12 20:33:52.015122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:07.873 [2024-12-12 20:33:52.015153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.873 [2024-12-12 20:33:52.015161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:07.873 [2024-12-12 20:33:52.015168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.873 [2024-12-12 20:33:52.015176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.873 [2024-12-12 20:33:52.015267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.873 [2024-12-12 20:33:52.015277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:07.873 [2024-12-12 20:33:52.015284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.873 [2024-12-12 20:33:52.015292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.873 [2024-12-12 20:33:52.015318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.873 [2024-12-12 20:33:52.015326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:07.873 [2024-12-12 20:33:52.015334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.873 [2024-12-12 20:33:52.015341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.873 [2024-12-12 20:33:52.015376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.873 [2024-12-12 20:33:52.015384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:07.873 [2024-12-12 20:33:52.015392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.873 [2024-12-12 20:33:52.015399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.873 [2024-12-12 20:33:52.015455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.873 [2024-12-12 20:33:52.015465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:07.873 [2024-12-12 20:33:52.015473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.873 [2024-12-12 20:33:52.015480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.873 [2024-12-12 20:33:52.015589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.342 ms, result 0 00:23:08.807 00:23:08.807 00:23:08.807 20:33:52 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:10.706 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:10.706 20:33:54 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:10.706 [2024-12-12 20:33:54.930235] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
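For readers tracing the flow: the 'FTL shutdown' finish above, the md5sum check, and the spdk_dd invocation that follows are one iteration of the restore test's verify-then-rewrite cycle (the restore.sh@76 / restore.sh@79 markers in the log). Below is a minimal sketch of that cycle, assembled only from paths and flags visible in this run; it is for orientation and is not the actual restore.sh contents.

#!/usr/bin/env bash
# Sketch of the verify-then-rewrite step seen in this log (restore.sh@76/@79).
# Paths and flags are copied from the run above; nothing else is assumed.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
TESTFILE=$SPDK/test/ftl/testfile
FTL_JSON=$SPDK/test/ftl/config/ftl.json

# Verify the data read back from ftl0 against the previously recorded checksum
# (the "/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK" line above).
md5sum -c "$TESTFILE.md5"

# Rewrite the test pattern into ftl0; --seek skips that many output blocks
# before writing (the value is taken verbatim from the command logged above).
"$SPDK/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 \
    --json="$FTL_JSON" --seek=131072

Assuming the harness runs with errexit, as SPDK autotest scripts typically do, a checksum mismatch fails the stage before the next spdk_dd pass starts; that is consistent with each cycle in this log showing the md5 check strictly between an 'FTL shutdown' and the next 'FTL startup'.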
00:23:10.706 [2024-12-12 20:33:54.930537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80293 ] 00:23:10.964 [2024-12-12 20:33:55.090192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.964 [2024-12-12 20:33:55.186574] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.221 [2024-12-12 20:33:55.443701] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:11.221 [2024-12-12 20:33:55.443781] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:11.480 [2024-12-12 20:33:55.600100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.480 [2024-12-12 20:33:55.600149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:11.480 [2024-12-12 20:33:55.600161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:11.480 [2024-12-12 20:33:55.600169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.480 [2024-12-12 20:33:55.600211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.480 [2024-12-12 20:33:55.600223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:11.480 [2024-12-12 20:33:55.600231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:11.480 [2024-12-12 20:33:55.600239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.480 [2024-12-12 20:33:55.600255] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:11.480 [2024-12-12 20:33:55.601187] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:11.480 [2024-12-12 20:33:55.601228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.480 [2024-12-12 20:33:55.601238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:11.480 [2024-12-12 20:33:55.601248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:23:11.480 [2024-12-12 20:33:55.601256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.480 [2024-12-12 20:33:55.602327] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:11.480 [2024-12-12 20:33:55.615088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.480 [2024-12-12 20:33:55.615122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:11.480 [2024-12-12 20:33:55.615134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.762 ms 00:23:11.480 [2024-12-12 20:33:55.615142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.480 [2024-12-12 20:33:55.615198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.480 [2024-12-12 20:33:55.615208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:11.480 [2024-12-12 20:33:55.615216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:11.480 [2024-12-12 20:33:55.615222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.480 [2024-12-12 20:33:55.620196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:11.480 [2024-12-12 20:33:55.620226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:11.480 [2024-12-12 20:33:55.620236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.924 ms 00:23:11.480 [2024-12-12 20:33:55.620247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.480 [2024-12-12 20:33:55.620311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.480 [2024-12-12 20:33:55.620320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:11.480 [2024-12-12 20:33:55.620328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:11.480 [2024-12-12 20:33:55.620335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.480 [2024-12-12 20:33:55.620383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.480 [2024-12-12 20:33:55.620392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:11.480 [2024-12-12 20:33:55.620400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:11.480 [2024-12-12 20:33:55.620407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.480 [2024-12-12 20:33:55.620452] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:11.480 [2024-12-12 20:33:55.623771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.480 [2024-12-12 20:33:55.623797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:11.480 [2024-12-12 20:33:55.623809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.326 ms 00:23:11.480 [2024-12-12 20:33:55.623816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.480 [2024-12-12 20:33:55.623846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.480 [2024-12-12 20:33:55.623854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:11.480 [2024-12-12 20:33:55.623862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:11.480 [2024-12-12 20:33:55.623869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.480 [2024-12-12 20:33:55.623887] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:11.480 [2024-12-12 20:33:55.623905] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:11.480 [2024-12-12 20:33:55.623938] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:11.480 [2024-12-12 20:33:55.623955] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:11.480 [2024-12-12 20:33:55.624058] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:11.480 [2024-12-12 20:33:55.624068] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:11.480 [2024-12-12 20:33:55.624078] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:11.480 [2024-12-12 20:33:55.624087] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:11.480 [2024-12-12 20:33:55.624096] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:11.480 [2024-12-12 20:33:55.624103] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:11.480 [2024-12-12 20:33:55.624111] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:11.481 [2024-12-12 20:33:55.624117] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:11.481 [2024-12-12 20:33:55.624127] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:11.481 [2024-12-12 20:33:55.624134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.481 [2024-12-12 20:33:55.624141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:11.481 [2024-12-12 20:33:55.624149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:23:11.481 [2024-12-12 20:33:55.624156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.481 [2024-12-12 20:33:55.624237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.481 [2024-12-12 20:33:55.624245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:11.481 [2024-12-12 20:33:55.624253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:11.481 [2024-12-12 20:33:55.624259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.481 [2024-12-12 20:33:55.624367] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:11.481 [2024-12-12 20:33:55.624377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:11.481 [2024-12-12 20:33:55.624385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:11.481 [2024-12-12 20:33:55.624392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.481 [2024-12-12 20:33:55.624400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:11.481 [2024-12-12 20:33:55.624406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:11.481 [2024-12-12 20:33:55.624438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:11.481 [2024-12-12 20:33:55.624447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:11.481 [2024-12-12 20:33:55.624454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:11.481 [2024-12-12 20:33:55.624460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:11.481 [2024-12-12 20:33:55.624467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:11.481 [2024-12-12 20:33:55.624474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:11.481 [2024-12-12 20:33:55.624481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:11.481 [2024-12-12 20:33:55.624494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:11.481 [2024-12-12 20:33:55.624501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:11.481 [2024-12-12 20:33:55.624508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.481 [2024-12-12 20:33:55.624514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:11.481 [2024-12-12 20:33:55.624521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:11.481 [2024-12-12 20:33:55.624527] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.481 [2024-12-12 20:33:55.624533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:11.481 [2024-12-12 20:33:55.624540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:11.481 [2024-12-12 20:33:55.624547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:11.481 [2024-12-12 20:33:55.624553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:11.481 [2024-12-12 20:33:55.624560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:11.481 [2024-12-12 20:33:55.624566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:11.481 [2024-12-12 20:33:55.624572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:11.481 [2024-12-12 20:33:55.624579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:11.481 [2024-12-12 20:33:55.624585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:11.481 [2024-12-12 20:33:55.624591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:11.481 [2024-12-12 20:33:55.624598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:11.481 [2024-12-12 20:33:55.624604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:11.481 [2024-12-12 20:33:55.624610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:11.481 [2024-12-12 20:33:55.624617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:11.481 [2024-12-12 20:33:55.624623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:11.481 [2024-12-12 20:33:55.624629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:11.481 [2024-12-12 20:33:55.624635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:11.481 [2024-12-12 20:33:55.624642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:11.481 [2024-12-12 20:33:55.624649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:11.481 [2024-12-12 20:33:55.624655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:11.481 [2024-12-12 20:33:55.624661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.481 [2024-12-12 20:33:55.624667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:11.481 [2024-12-12 20:33:55.624674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:11.481 [2024-12-12 20:33:55.624681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.481 [2024-12-12 20:33:55.624687] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:11.481 [2024-12-12 20:33:55.624695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:11.481 [2024-12-12 20:33:55.624702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:11.481 [2024-12-12 20:33:55.624709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.481 [2024-12-12 20:33:55.624716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:11.481 [2024-12-12 20:33:55.624723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:11.481 [2024-12-12 20:33:55.624729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:11.481 
[2024-12-12 20:33:55.624735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:11.481 [2024-12-12 20:33:55.624742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:11.481 [2024-12-12 20:33:55.624748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:11.481 [2024-12-12 20:33:55.624756] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:11.481 [2024-12-12 20:33:55.624765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:11.481 [2024-12-12 20:33:55.624776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:11.481 [2024-12-12 20:33:55.624783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:11.481 [2024-12-12 20:33:55.624790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:11.481 [2024-12-12 20:33:55.624797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:11.481 [2024-12-12 20:33:55.624804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:11.481 [2024-12-12 20:33:55.624811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:11.481 [2024-12-12 20:33:55.624818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:11.481 [2024-12-12 20:33:55.624825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:11.481 [2024-12-12 20:33:55.624831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:11.481 [2024-12-12 20:33:55.624838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:11.481 [2024-12-12 20:33:55.624845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:11.481 [2024-12-12 20:33:55.624852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:11.481 [2024-12-12 20:33:55.624858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:11.481 [2024-12-12 20:33:55.624866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:11.481 [2024-12-12 20:33:55.624872] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:11.481 [2024-12-12 20:33:55.624880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:11.481 [2024-12-12 20:33:55.624888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:11.481 [2024-12-12 20:33:55.624895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:11.481 [2024-12-12 20:33:55.624902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:11.481 [2024-12-12 20:33:55.624910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:11.481 [2024-12-12 20:33:55.624917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.481 [2024-12-12 20:33:55.624923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:11.481 [2024-12-12 20:33:55.624931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:23:11.481 [2024-12-12 20:33:55.624938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.481 [2024-12-12 20:33:55.650906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.481 [2024-12-12 20:33:55.651034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:11.481 [2024-12-12 20:33:55.651088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.927 ms 00:23:11.481 [2024-12-12 20:33:55.651116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.481 [2024-12-12 20:33:55.651212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.481 [2024-12-12 20:33:55.651232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:11.481 [2024-12-12 20:33:55.651252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:11.481 [2024-12-12 20:33:55.651270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.481 [2024-12-12 20:33:55.694090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.481 [2024-12-12 20:33:55.694235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:11.481 [2024-12-12 20:33:55.694296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.757 ms 00:23:11.481 [2024-12-12 20:33:55.694321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.481 [2024-12-12 20:33:55.694374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.481 [2024-12-12 20:33:55.694398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:11.482 [2024-12-12 20:33:55.694441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:11.482 [2024-12-12 20:33:55.694461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.482 [2024-12-12 20:33:55.694831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.482 [2024-12-12 20:33:55.694878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:11.482 [2024-12-12 20:33:55.694900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:23:11.482 [2024-12-12 20:33:55.694918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.482 [2024-12-12 20:33:55.695056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.482 [2024-12-12 20:33:55.695135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:11.482 [2024-12-12 20:33:55.695162] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:23:11.482 [2024-12-12 20:33:55.695181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.740 [2024-12-12 20:33:55.708157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.740 [2024-12-12 20:33:55.708272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:11.740 [2024-12-12 20:33:55.708324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.945 ms 00:23:11.740 [2024-12-12 20:33:55.708345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.740 [2024-12-12 20:33:55.720962] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:11.740 [2024-12-12 20:33:55.721090] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:11.740 [2024-12-12 20:33:55.721147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.740 [2024-12-12 20:33:55.721168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:11.740 [2024-12-12 20:33:55.721187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.685 ms 00:23:11.740 [2024-12-12 20:33:55.721205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.740 [2024-12-12 20:33:55.745191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.740 [2024-12-12 20:33:55.745298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:11.740 [2024-12-12 20:33:55.745347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.944 ms 00:23:11.740 [2024-12-12 20:33:55.745369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.740 [2024-12-12 20:33:55.757462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.740 [2024-12-12 20:33:55.757591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:11.740 [2024-12-12 20:33:55.757647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.732 ms 00:23:11.740 [2024-12-12 20:33:55.757669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.740 [2024-12-12 20:33:55.770153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.740 [2024-12-12 20:33:55.770282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:11.740 [2024-12-12 20:33:55.770336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.958 ms 00:23:11.740 [2024-12-12 20:33:55.770358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.740 [2024-12-12 20:33:55.770985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.740 [2024-12-12 20:33:55.771065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:11.740 [2024-12-12 20:33:55.771116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:23:11.740 [2024-12-12 20:33:55.771138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.740 [2024-12-12 20:33:55.826182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.740 [2024-12-12 20:33:55.826329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:11.740 [2024-12-12 20:33:55.826385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.012 ms 00:23:11.740 [2024-12-12 20:33:55.826408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.740 [2024-12-12 20:33:55.837218] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:11.740 [2024-12-12 20:33:55.839643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.740 [2024-12-12 20:33:55.839743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:11.740 [2024-12-12 20:33:55.839813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.675 ms 00:23:11.740 [2024-12-12 20:33:55.839835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.740 [2024-12-12 20:33:55.839947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.740 [2024-12-12 20:33:55.839975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:11.740 [2024-12-12 20:33:55.839996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:11.740 [2024-12-12 20:33:55.840017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.740 [2024-12-12 20:33:55.840098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.740 [2024-12-12 20:33:55.840197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:11.740 [2024-12-12 20:33:55.840217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:11.740 [2024-12-12 20:33:55.840236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.740 [2024-12-12 20:33:55.840268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.740 [2024-12-12 20:33:55.840289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:11.740 [2024-12-12 20:33:55.840353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:11.740 [2024-12-12 20:33:55.840376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.740 [2024-12-12 20:33:55.840435] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:11.740 [2024-12-12 20:33:55.840460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.740 [2024-12-12 20:33:55.840478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:11.740 [2024-12-12 20:33:55.840497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:11.740 [2024-12-12 20:33:55.840516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.740 [2024-12-12 20:33:55.864258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.740 [2024-12-12 20:33:55.864381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:11.740 [2024-12-12 20:33:55.864455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.712 ms 00:23:11.740 [2024-12-12 20:33:55.864478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.740 [2024-12-12 20:33:55.864553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.740 [2024-12-12 20:33:55.864578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:11.740 [2024-12-12 20:33:55.864597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:11.740 [2024-12-12 20:33:55.864616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
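One sanity check the startup dump above invites: the L2P region size follows directly from the logged parameters. 20971520 entries at 4 bytes per address is exactly the 80.00 MiB shown for "Region l2p", of which the cache keeps at most 9 of 10 MiB resident (the ftl_l2p_cache line just above). A quick arithmetic cross-check using only numbers printed in this log:

#!/usr/bin/env bash
# Cross-check of the L2P sizing reported during 'FTL startup'.
# All inputs are values printed in the log above; nothing here is measured.
entries=20971520   # "L2P entries: 20971520"
addr_size=4        # "L2P address size: 4" (bytes per entry)
bytes=$((entries * addr_size))
echo "L2P table: $((bytes / 1024 / 1024)) MiB"  # prints 80, matching "Region l2p ... blocks: 80.00 MiB"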
00:23:11.740 [2024-12-12 20:33:55.865606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 265.082 ms, result 0 00:23:12.713 [2024-12-12T20:33:58.315Z] Copying: 19/1024 [MB] (19 MBps) [... roughly 50 incremental progress updates elided; the copy advanced from 19 MB to 1003 MB at 10-43 MBps ...] [2024-12-12T20:34:47.033Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-12-12 20:34:46.712043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.805 [2024-12-12 20:34:46.712092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:02.805 [2024-12-12 20:34:46.712105] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:02.805 [2024-12-12 20:34:46.712113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.805 [2024-12-12 20:34:46.712133] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:02.805 [2024-12-12 20:34:46.714739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.805 [2024-12-12 20:34:46.714774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:02.805 [2024-12-12 20:34:46.714784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.592 ms 00:24:02.805 [2024-12-12 20:34:46.714792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.805 [2024-12-12 20:34:46.716094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.805 [2024-12-12 20:34:46.716127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:02.805 [2024-12-12 20:34:46.716137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:24:02.805 [2024-12-12 20:34:46.716145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.805 [2024-12-12 20:34:46.729349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.805 [2024-12-12 20:34:46.729393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:02.805 [2024-12-12 20:34:46.729403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.190 ms 00:24:02.805 [2024-12-12 20:34:46.729429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.805 [2024-12-12 20:34:46.735517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.805 [2024-12-12 20:34:46.735542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:02.805 [2024-12-12 20:34:46.735552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.065 ms 00:24:02.805 [2024-12-12 20:34:46.735560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.805 [2024-12-12 20:34:46.759124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.805 [2024-12-12 20:34:46.759156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:02.805 [2024-12-12 20:34:46.759167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.500 ms 00:24:02.805 [2024-12-12 20:34:46.759175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.805 [2024-12-12 20:34:46.773547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.805 [2024-12-12 20:34:46.773578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:02.805 [2024-12-12 20:34:46.773589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.341 ms 00:24:02.805 [2024-12-12 20:34:46.773596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.805 [2024-12-12 20:34:46.775093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.805 [2024-12-12 20:34:46.775123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:02.805 [2024-12-12 20:34:46.775132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.459 ms 00:24:02.805 [2024-12-12 20:34:46.775139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.805 [2024-12-12 20:34:46.798409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:24:02.805 [2024-12-12 20:34:46.798442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:02.805 [2024-12-12 20:34:46.798453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.256 ms 00:24:02.805 [2024-12-12 20:34:46.798459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.805 [2024-12-12 20:34:46.821519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.805 [2024-12-12 20:34:46.821568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:02.805 [2024-12-12 20:34:46.821578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.026 ms 00:24:02.805 [2024-12-12 20:34:46.821585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.805 [2024-12-12 20:34:46.843900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.806 [2024-12-12 20:34:46.843928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:02.806 [2024-12-12 20:34:46.843938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.286 ms 00:24:02.806 [2024-12-12 20:34:46.843944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.806 [2024-12-12 20:34:46.867025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.806 [2024-12-12 20:34:46.867054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:02.806 [2024-12-12 20:34:46.867063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.032 ms 00:24:02.806 [2024-12-12 20:34:46.867070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.806 [2024-12-12 20:34:46.867097] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:02.806 [2024-12-12 20:34:46.867114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 512 / 261120 wr_cnt: 1 state: open 00:24:02.806 [2024-12-12 20:34:46.867127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 
[2024-12-12 20:34:46.867207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:24:02.806 [2024-12-12 20:34:46.867494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:02.806 [2024-12-12 20:34:46.867854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:02.807 [2024-12-12 20:34:46.867975] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:02.807 [2024-12-12 20:34:46.867982] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c27c25e-6895-4077-a8c7-dd2dac7fe71c 00:24:02.807 [2024-12-12 20:34:46.867989] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 512 00:24:02.807 [2024-12-12 20:34:46.867996] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1472 00:24:02.807 [2024-12-12 20:34:46.868003] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 512 00:24:02.807 [2024-12-12 20:34:46.868010] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 2.8750 00:24:02.807 [2024-12-12 20:34:46.868023] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:02.807 [2024-12-12 20:34:46.868030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:02.807 [2024-12-12 20:34:46.868038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:02.807 [2024-12-12 20:34:46.868044] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:02.807 [2024-12-12 20:34:46.868049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:02.807 [2024-12-12 20:34:46.868056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.807 [2024-12-12 20:34:46.868064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:02.807 [2024-12-12 20:34:46.868071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:24:02.807 [2024-12-12 
20:34:46.868080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.807 [2024-12-12 20:34:46.880103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.807 [2024-12-12 20:34:46.880130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:02.807 [2024-12-12 20:34:46.880140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.008 ms 00:24:02.807 [2024-12-12 20:34:46.880147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.807 [2024-12-12 20:34:46.880513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.807 [2024-12-12 20:34:46.880527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:02.807 [2024-12-12 20:34:46.880535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:24:02.807 [2024-12-12 20:34:46.880542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.807 [2024-12-12 20:34:46.912998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.807 [2024-12-12 20:34:46.913027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:02.807 [2024-12-12 20:34:46.913037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.807 [2024-12-12 20:34:46.913045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.807 [2024-12-12 20:34:46.913093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.807 [2024-12-12 20:34:46.913103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:02.807 [2024-12-12 20:34:46.913110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.807 [2024-12-12 20:34:46.913117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.807 [2024-12-12 20:34:46.913170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.807 [2024-12-12 20:34:46.913179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:02.807 [2024-12-12 20:34:46.913186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.807 [2024-12-12 20:34:46.913193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.807 [2024-12-12 20:34:46.913207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.807 [2024-12-12 20:34:46.913214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:02.807 [2024-12-12 20:34:46.913224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.807 [2024-12-12 20:34:46.913231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.807 [2024-12-12 20:34:46.990047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.807 [2024-12-12 20:34:46.990082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.807 [2024-12-12 20:34:46.990092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.807 [2024-12-12 20:34:46.990100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.066 [2024-12-12 20:34:47.052626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.066 [2024-12-12 20:34:47.052667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:03.066 [2024-12-12 20:34:47.052677] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.066 [2024-12-12 20:34:47.052685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.066 [2024-12-12 20:34:47.052748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.066 [2024-12-12 20:34:47.052756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:03.066 [2024-12-12 20:34:47.052764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.066 [2024-12-12 20:34:47.052771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.066 [2024-12-12 20:34:47.052804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.066 [2024-12-12 20:34:47.052812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:03.066 [2024-12-12 20:34:47.052820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.066 [2024-12-12 20:34:47.052830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.066 [2024-12-12 20:34:47.052912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.066 [2024-12-12 20:34:47.052921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:03.066 [2024-12-12 20:34:47.052929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.066 [2024-12-12 20:34:47.052936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.066 [2024-12-12 20:34:47.052964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.066 [2024-12-12 20:34:47.052973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:03.066 [2024-12-12 20:34:47.052980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.066 [2024-12-12 20:34:47.052987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.066 [2024-12-12 20:34:47.053024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.066 [2024-12-12 20:34:47.053033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:03.066 [2024-12-12 20:34:47.053040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.066 [2024-12-12 20:34:47.053048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.066 [2024-12-12 20:34:47.053084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.066 [2024-12-12 20:34:47.053093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:03.066 [2024-12-12 20:34:47.053100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.066 [2024-12-12 20:34:47.053110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.066 [2024-12-12 20:34:47.053214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.152 ms, result 0 00:24:04.000 00:24:04.000 00:24:04.000 20:34:47 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:04.000 [2024-12-12 20:34:48.028993] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
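The ftl_debug.c statistics block dumped above reports 1472 total writes against 512 user writes; the "WAF: 2.8750" record is just their ratio. A minimal sketch of that arithmetic, with the counter values copied verbatim from this log (the script is an offline check, not part of SPDK):

# WAF (write amplification factor) as reported by ftl_dev_dump_stats above:
# WAF = total writes / user writes (units as reported by the dump).
total_writes = 1472  # "total writes: 1472"
user_writes = 512    # "user writes: 512"
waf = total_writes / user_writes
print(f"WAF: {waf:.4f}")  # 2.8750, matching the "WAF: 2.8750" record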
00:24:04.000 [2024-12-12 20:34:48.029113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80838 ] 00:24:04.000 [2024-12-12 20:34:48.188877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.313 [2024-12-12 20:34:48.283807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.586 [2024-12-12 20:34:48.540442] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:04.586 [2024-12-12 20:34:48.540504] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:04.586 [2024-12-12 20:34:48.695792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.586 [2024-12-12 20:34:48.695841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:04.586 [2024-12-12 20:34:48.695854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:04.586 [2024-12-12 20:34:48.695862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.586 [2024-12-12 20:34:48.695908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.586 [2024-12-12 20:34:48.695920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:04.586 [2024-12-12 20:34:48.695928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:04.586 [2024-12-12 20:34:48.695935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.586 [2024-12-12 20:34:48.695950] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:04.586 [2024-12-12 20:34:48.696620] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:04.586 [2024-12-12 20:34:48.696637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.586 [2024-12-12 20:34:48.696644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:04.586 [2024-12-12 20:34:48.696653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:24:04.586 [2024-12-12 20:34:48.696659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.586 [2024-12-12 20:34:48.697684] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:04.587 [2024-12-12 20:34:48.710558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.587 [2024-12-12 20:34:48.710590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:04.587 [2024-12-12 20:34:48.710602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.874 ms 00:24:04.587 [2024-12-12 20:34:48.710610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.587 [2024-12-12 20:34:48.710663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.587 [2024-12-12 20:34:48.710672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:04.587 [2024-12-12 20:34:48.710680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:04.587 [2024-12-12 20:34:48.710686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.587 [2024-12-12 20:34:48.715464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:04.587 [2024-12-12 20:34:48.715491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:04.587 [2024-12-12 20:34:48.715501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.729 ms 00:24:04.587 [2024-12-12 20:34:48.715511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.587 [2024-12-12 20:34:48.715576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.587 [2024-12-12 20:34:48.715584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:04.587 [2024-12-12 20:34:48.715592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:04.587 [2024-12-12 20:34:48.715599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.587 [2024-12-12 20:34:48.715651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.587 [2024-12-12 20:34:48.715661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:04.587 [2024-12-12 20:34:48.715668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:04.587 [2024-12-12 20:34:48.715675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.587 [2024-12-12 20:34:48.715699] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:04.587 [2024-12-12 20:34:48.718967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.587 [2024-12-12 20:34:48.718994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:04.587 [2024-12-12 20:34:48.719005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.274 ms 00:24:04.587 [2024-12-12 20:34:48.719012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.587 [2024-12-12 20:34:48.719040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.587 [2024-12-12 20:34:48.719048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:04.587 [2024-12-12 20:34:48.719056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:04.587 [2024-12-12 20:34:48.719063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.587 [2024-12-12 20:34:48.719081] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:04.587 [2024-12-12 20:34:48.719098] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:04.587 [2024-12-12 20:34:48.719132] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:04.587 [2024-12-12 20:34:48.719149] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:04.587 [2024-12-12 20:34:48.719250] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:04.587 [2024-12-12 20:34:48.719260] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:04.587 [2024-12-12 20:34:48.719270] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:04.587 [2024-12-12 20:34:48.719279] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:04.587 [2024-12-12 20:34:48.719288] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:04.587 [2024-12-12 20:34:48.719296] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:04.587 [2024-12-12 20:34:48.719303] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:04.587 [2024-12-12 20:34:48.719309] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:04.587 [2024-12-12 20:34:48.719319] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:04.587 [2024-12-12 20:34:48.719326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.587 [2024-12-12 20:34:48.719333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:04.587 [2024-12-12 20:34:48.719341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:24:04.587 [2024-12-12 20:34:48.719348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.587 [2024-12-12 20:34:48.719448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.587 [2024-12-12 20:34:48.719457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:04.587 [2024-12-12 20:34:48.719464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:24:04.587 [2024-12-12 20:34:48.719471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.587 [2024-12-12 20:34:48.719568] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:04.587 [2024-12-12 20:34:48.719578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:04.587 [2024-12-12 20:34:48.719586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:04.587 [2024-12-12 20:34:48.719594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.587 [2024-12-12 20:34:48.719601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:04.587 [2024-12-12 20:34:48.719608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:04.587 [2024-12-12 20:34:48.719614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:04.587 [2024-12-12 20:34:48.719622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:04.587 [2024-12-12 20:34:48.719628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:04.587 [2024-12-12 20:34:48.719635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:04.587 [2024-12-12 20:34:48.719641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:04.587 [2024-12-12 20:34:48.719648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:04.587 [2024-12-12 20:34:48.719655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:04.587 [2024-12-12 20:34:48.719667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:04.587 [2024-12-12 20:34:48.719675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:04.587 [2024-12-12 20:34:48.719682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.587 [2024-12-12 20:34:48.719689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:04.587 [2024-12-12 20:34:48.719697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:04.587 [2024-12-12 20:34:48.719704] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.587 [2024-12-12 20:34:48.719711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:04.587 [2024-12-12 20:34:48.719717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:04.587 [2024-12-12 20:34:48.719724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.587 [2024-12-12 20:34:48.719731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:04.587 [2024-12-12 20:34:48.719737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:04.587 [2024-12-12 20:34:48.719744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.587 [2024-12-12 20:34:48.719750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:04.587 [2024-12-12 20:34:48.719756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:04.587 [2024-12-12 20:34:48.719763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.587 [2024-12-12 20:34:48.719769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:04.587 [2024-12-12 20:34:48.719776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:04.587 [2024-12-12 20:34:48.719782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.587 [2024-12-12 20:34:48.719789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:04.587 [2024-12-12 20:34:48.719795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:04.587 [2024-12-12 20:34:48.719802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:04.587 [2024-12-12 20:34:48.719808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:04.587 [2024-12-12 20:34:48.719814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:04.587 [2024-12-12 20:34:48.719821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:04.587 [2024-12-12 20:34:48.719827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:04.587 [2024-12-12 20:34:48.719833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:04.587 [2024-12-12 20:34:48.719840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.587 [2024-12-12 20:34:48.719846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:04.587 [2024-12-12 20:34:48.719852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:04.587 [2024-12-12 20:34:48.719859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.587 [2024-12-12 20:34:48.719865] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:04.587 [2024-12-12 20:34:48.719873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:04.587 [2024-12-12 20:34:48.719879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:04.587 [2024-12-12 20:34:48.719887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.587 [2024-12-12 20:34:48.719894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:04.587 [2024-12-12 20:34:48.719902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:04.588 [2024-12-12 20:34:48.719908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:04.588 
[2024-12-12 20:34:48.719915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:04.588 [2024-12-12 20:34:48.719921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:04.588 [2024-12-12 20:34:48.719927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:04.588 [2024-12-12 20:34:48.719935] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:04.588 [2024-12-12 20:34:48.719944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:04.588 [2024-12-12 20:34:48.719954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:04.588 [2024-12-12 20:34:48.719962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:04.588 [2024-12-12 20:34:48.719968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:04.588 [2024-12-12 20:34:48.719975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:04.588 [2024-12-12 20:34:48.719983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:04.588 [2024-12-12 20:34:48.719989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:04.588 [2024-12-12 20:34:48.719996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:04.588 [2024-12-12 20:34:48.720003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:04.588 [2024-12-12 20:34:48.720010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:04.588 [2024-12-12 20:34:48.720017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:04.588 [2024-12-12 20:34:48.720024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:04.588 [2024-12-12 20:34:48.720030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:04.588 [2024-12-12 20:34:48.720037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:04.588 [2024-12-12 20:34:48.720044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:04.588 [2024-12-12 20:34:48.720051] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:04.588 [2024-12-12 20:34:48.720060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:04.588 [2024-12-12 20:34:48.720067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:04.588 [2024-12-12 20:34:48.720074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:04.588 [2024-12-12 20:34:48.720081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:04.588 [2024-12-12 20:34:48.720088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:04.588 [2024-12-12 20:34:48.720095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.588 [2024-12-12 20:34:48.720103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:04.588 [2024-12-12 20:34:48.720110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:24:04.588 [2024-12-12 20:34:48.720118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.588 [2024-12-12 20:34:48.745520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.588 [2024-12-12 20:34:48.745553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:04.588 [2024-12-12 20:34:48.745563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.352 ms 00:24:04.588 [2024-12-12 20:34:48.745573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.588 [2024-12-12 20:34:48.745653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.588 [2024-12-12 20:34:48.745661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:04.588 [2024-12-12 20:34:48.745668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:04.588 [2024-12-12 20:34:48.745676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.588 [2024-12-12 20:34:48.788238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.588 [2024-12-12 20:34:48.788379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:04.588 [2024-12-12 20:34:48.788396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.514 ms 00:24:04.588 [2024-12-12 20:34:48.788404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.588 [2024-12-12 20:34:48.788460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.588 [2024-12-12 20:34:48.788470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:04.588 [2024-12-12 20:34:48.788483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:04.588 [2024-12-12 20:34:48.788490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.588 [2024-12-12 20:34:48.788844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.588 [2024-12-12 20:34:48.788860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:04.588 [2024-12-12 20:34:48.788869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:24:04.588 [2024-12-12 20:34:48.788876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.588 [2024-12-12 20:34:48.788999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.588 [2024-12-12 20:34:48.789008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:04.588 [2024-12-12 20:34:48.789018] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:24:04.588 [2024-12-12 20:34:48.789026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.588 [2024-12-12 20:34:48.801885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.588 [2024-12-12 20:34:48.801917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:04.588 [2024-12-12 20:34:48.801927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.840 ms 00:24:04.588 [2024-12-12 20:34:48.801934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.847 [2024-12-12 20:34:48.814312] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:24:04.847 [2024-12-12 20:34:48.814345] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:04.847 [2024-12-12 20:34:48.814356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.847 [2024-12-12 20:34:48.814363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:04.847 [2024-12-12 20:34:48.814371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.335 ms 00:24:04.847 [2024-12-12 20:34:48.814378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.847 [2024-12-12 20:34:48.838236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.847 [2024-12-12 20:34:48.838360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:04.847 [2024-12-12 20:34:48.838376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.824 ms 00:24:04.847 [2024-12-12 20:34:48.838384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.847 [2024-12-12 20:34:48.850066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.847 [2024-12-12 20:34:48.850096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:04.847 [2024-12-12 20:34:48.850105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.636 ms 00:24:04.847 [2024-12-12 20:34:48.850112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.847 [2024-12-12 20:34:48.861319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.847 [2024-12-12 20:34:48.861442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:04.847 [2024-12-12 20:34:48.861456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.176 ms 00:24:04.847 [2024-12-12 20:34:48.861463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.847 [2024-12-12 20:34:48.862055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.847 [2024-12-12 20:34:48.862074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:04.847 [2024-12-12 20:34:48.862085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:24:04.847 [2024-12-12 20:34:48.862093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.847 [2024-12-12 20:34:48.916687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.847 [2024-12-12 20:34:48.916882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:04.847 [2024-12-12 20:34:48.916907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.576 ms 00:24:04.847 [2024-12-12 20:34:48.916915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.847 [2024-12-12 20:34:48.927634] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:04.847 [2024-12-12 20:34:48.929904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.847 [2024-12-12 20:34:48.929934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:04.847 [2024-12-12 20:34:48.929947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.677 ms 00:24:04.847 [2024-12-12 20:34:48.929956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.847 [2024-12-12 20:34:48.930058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.847 [2024-12-12 20:34:48.930071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:04.847 [2024-12-12 20:34:48.930081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:04.847 [2024-12-12 20:34:48.930092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.847 [2024-12-12 20:34:48.930672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.847 [2024-12-12 20:34:48.930690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:04.847 [2024-12-12 20:34:48.930698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:24:04.847 [2024-12-12 20:34:48.930706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.847 [2024-12-12 20:34:48.930727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.847 [2024-12-12 20:34:48.930735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:04.847 [2024-12-12 20:34:48.930743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:04.847 [2024-12-12 20:34:48.930750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.847 [2024-12-12 20:34:48.930784] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:04.847 [2024-12-12 20:34:48.930794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.847 [2024-12-12 20:34:48.930802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:04.847 [2024-12-12 20:34:48.930809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:04.847 [2024-12-12 20:34:48.930817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.847 [2024-12-12 20:34:48.954062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.847 [2024-12-12 20:34:48.954175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:04.847 [2024-12-12 20:34:48.954231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.228 ms 00:24:04.847 [2024-12-12 20:34:48.954254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.847 [2024-12-12 20:34:48.954323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.847 [2024-12-12 20:34:48.954348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:04.848 [2024-12-12 20:34:48.954367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:04.848 [2024-12-12 20:34:48.954385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
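Each management step above appears as an Action / name / duration / status quadruple emitted by trace_step in mngt/ftl_mngt.c, with finish_msg reporting the overall total at the end of the process. A rough offline tally of those records is sketched below; the regexes are mine, assume this log's flattened single-line layout, and the per-step sum should come out near (not necessarily equal to) the finish_msg total:

import re
import sys

# Offline helper (not part of SPDK): sums the per-step "duration: X ms"
# records emitted by mngt/ftl_mngt.c trace_step so the result can be
# compared with finish_msg, e.g. "name 'FTL startup', duration = 259.241 ms".
STEP_DURATION = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([\d.]+) ms")
TOTAL = re.compile(r"459:finish_msg: .*? name '([^']+)', duration = ([\d.]+) ms")

def check(log_text: str) -> None:
    step_sum = sum(float(m.group(1)) for m in STEP_DURATION.finditer(log_text))
    print(f"sum of trace_step durations: {step_sum:.3f} ms")
    for m in TOTAL.finditer(log_text):
        print(f"finish_msg: {m.group(1)!r} reported {m.group(2)} ms")

if __name__ == "__main__":
    check(sys.stdin.read())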
00:24:04.848 [2024-12-12 20:34:48.955451] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 259.241 ms, result 0
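Two quick checks on the spdk_dd copy whose final progress record follows. First, --count=262144 accounts for the 1024 MB total if the FTL block size is 4 KiB (inferred from the totals, not shown in this log). Second, the reported average of 18 MBps is consistent with the wall-clock window between startup finishing and the final progress stamp, assuming both stamps share one clock:

from datetime import datetime

# Back-of-envelope checks; all numbers are read off this log, and the
# 4 KiB FTL block size is an inference, not a configured value here.
count_blocks = 262144                       # spdk_dd --count=262144
print(count_blocks * 4096 // 2**20, "MiB")  # 1024, matching "Copying: 1024/1024 [MB]"

start = datetime.fromisoformat("2024-12-12 20:34:48.955451")  # FTL startup finished
end = datetime.fromisoformat("2024-12-12 20:35:46.036000")    # final progress stamp
print(f"{1024 / (end - start).total_seconds():.1f} MBps")     # ~17.9, i.e. "average 18 MBps"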
[2024-12-12T20:35:46.036Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-12 20:35:45.731359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.808 [2024-12-12 20:35:45.731435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:01.808 [2024-12-12 20:35:45.731449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:01.808 [2024-12-12 20:35:45.731462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.808 [2024-12-12 20:35:45.731484] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:01.808 [2024-12-12 20:35:45.735200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.808 [2024-12-12 20:35:45.735231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:01.808 [2024-12-12 20:35:45.735241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.700 ms 00:25:01.808 [2024-12-12 20:35:45.735249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.808 [2024-12-12 20:35:45.735563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.808 [2024-12-12 20:35:45.735574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:01.808 [2024-12-12 20:35:45.735582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:25:01.808 [2024-12-12 20:35:45.735595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.808 [2024-12-12 20:35:45.748833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.808 [2024-12-12 20:35:45.748870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:01.808 [2024-12-12 20:35:45.748881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.223 ms 00:25:01.808 [2024-12-12 20:35:45.748888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.808 [2024-12-12 20:35:45.755000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.808 [2024-12-12 20:35:45.755131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:01.808 [2024-12-12 20:35:45.755147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.087 ms 00:25:01.808 [2024-12-12 20:35:45.755161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.808 [2024-12-12 20:35:45.779319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.808 [2024-12-12 20:35:45.779351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:01.808 [2024-12-12 20:35:45.779362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.118 ms 00:25:01.808 [2024-12-12 20:35:45.779369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.808 [2024-12-12 20:35:45.793370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.808 [2024-12-12 20:35:45.793429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:01.808 [2024-12-12 20:35:45.793441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.968 ms 00:25:01.808 [2024-12-12 20:35:45.793448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.112 [2024-12-12 20:35:46.043177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.112 [2024-12-12 20:35:46.043218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:25:02.112 [2024-12-12 20:35:46.043228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 249.693 ms 00:25:02.112 [2024-12-12 20:35:46.043236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.112 [2024-12-12 20:35:46.066468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.112 [2024-12-12 20:35:46.066497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:02.112 [2024-12-12 20:35:46.066507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.218 ms 00:25:02.112 [2024-12-12 20:35:46.066514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.112 [2024-12-12 20:35:46.089594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.112 [2024-12-12 20:35:46.089623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:02.112 [2024-12-12 20:35:46.089633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.049 ms 00:25:02.112 [2024-12-12 20:35:46.089639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.112 [2024-12-12 20:35:46.112761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.112 [2024-12-12 20:35:46.112888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:02.112 [2024-12-12 20:35:46.112903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.092 ms 00:25:02.112 [2024-12-12 20:35:46.112910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.112 [2024-12-12 20:35:46.135905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.112 [2024-12-12 20:35:46.136020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:02.112 [2024-12-12 20:35:46.136035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.946 ms 00:25:02.112 [2024-12-12 20:35:46.136042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.112 [2024-12-12 20:35:46.136069] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:02.112 [2024-12-12 20:35:46.136082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:25:02.112 [2024-12-12 20:35:46.136092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:02.112 [2024-12-12 20:35:46.136100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:02.112 [2024-12-12 20:35:46.136108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:02.112 [2024-12-12 20:35:46.136115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:02.112 [2024-12-12 20:35:46.136123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:02.112 [2024-12-12 20:35:46.136130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:02.112 [2024-12-12 20:35:46.136138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:02.112 [2024-12-12 20:35:46.136145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:02.112 [2024-12-12 20:35:46.136153] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free [... Band 11 through Band 84 elided: 74 identical records, each 0 / 261120 wr_cnt: 0 state: free ...] 
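[Editor's note: the band dump continues through Band 100 below in the same format. A quick way to tally band states from a captured console log; the filename build.log is illustrative:]
  grep -o 'state: [a-z]*' build.log | sort | uniq -c   # e.g. '99 free' and '1 open' for this run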
00:25:02.113 [2024-12-12 20:35:46.136728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:02.113 [2024-12-12 20:35:46.136850] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:02.113 [2024-12-12 20:35:46.136858] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c27c25e-6895-4077-a8c7-dd2dac7fe71c 00:25:02.113 [2024-12-12 20:35:46.136866] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840 00:25:02.113 [2024-12-12 20:35:46.136873] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 132288 00:25:02.113 [2024-12-12 20:35:46.136879] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 131328 00:25:02.113 [2024-12-12 20:35:46.136887] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0073 00:25:02.113 [2024-12-12 20:35:46.136897] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:02.113 [2024-12-12 20:35:46.136910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:02.113 [2024-12-12 20:35:46.136917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:02.113 [2024-12-12 20:35:46.136923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:02.113 [2024-12-12 20:35:46.136930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] start: 0 00:25:02.113 [2024-12-12 20:35:46.136937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.113 [2024-12-12 20:35:46.136944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:02.113 [2024-12-12 20:35:46.136951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:25:02.113 [2024-12-12 20:35:46.136958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.113 [2024-12-12 20:35:46.149427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.113 [2024-12-12 20:35:46.149454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:02.113 [2024-12-12 20:35:46.149468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.442 ms 00:25:02.113 [2024-12-12 20:35:46.149475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.113 [2024-12-12 20:35:46.149819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.113 [2024-12-12 20:35:46.149828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:02.113 [2024-12-12 20:35:46.149836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:25:02.113 [2024-12-12 20:35:46.149843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.113 [2024-12-12 20:35:46.182601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.113 [2024-12-12 20:35:46.182728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:02.113 [2024-12-12 20:35:46.182743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.113 [2024-12-12 20:35:46.182751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.113 [2024-12-12 20:35:46.182800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.113 [2024-12-12 20:35:46.182808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:02.113 [2024-12-12 20:35:46.182815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.113 [2024-12-12 20:35:46.182822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.113 [2024-12-12 20:35:46.182873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.113 [2024-12-12 20:35:46.182883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:02.113 [2024-12-12 20:35:46.182894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.113 [2024-12-12 20:35:46.182901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.113 [2024-12-12 20:35:46.182915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.113 [2024-12-12 20:35:46.182922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:02.113 [2024-12-12 20:35:46.182929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.113 [2024-12-12 20:35:46.182936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.113 [2024-12-12 20:35:46.260332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.113 [2024-12-12 20:35:46.260372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:02.113 [2024-12-12 20:35:46.260383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.113 
[2024-12-12 20:35:46.260390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.113 [2024-12-12 20:35:46.323929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.113 [2024-12-12 20:35:46.323963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:02.113 [2024-12-12 20:35:46.323973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.113 [2024-12-12 20:35:46.323981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.113 [2024-12-12 20:35:46.324048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.113 [2024-12-12 20:35:46.324058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:02.113 [2024-12-12 20:35:46.324065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.113 [2024-12-12 20:35:46.324075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.113 [2024-12-12 20:35:46.324108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.113 [2024-12-12 20:35:46.324118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:02.113 [2024-12-12 20:35:46.324125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.113 [2024-12-12 20:35:46.324132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.113 [2024-12-12 20:35:46.324216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.113 [2024-12-12 20:35:46.324226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:02.113 [2024-12-12 20:35:46.324233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.113 [2024-12-12 20:35:46.324241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.113 [2024-12-12 20:35:46.324270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.114 [2024-12-12 20:35:46.324278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:02.114 [2024-12-12 20:35:46.324286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.114 [2024-12-12 20:35:46.324293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.114 [2024-12-12 20:35:46.324325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.114 [2024-12-12 20:35:46.324333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:02.114 [2024-12-12 20:35:46.324340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.114 [2024-12-12 20:35:46.324347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.114 [2024-12-12 20:35:46.324385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.114 [2024-12-12 20:35:46.324395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:02.114 [2024-12-12 20:35:46.324403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.114 [2024-12-12 20:35:46.324410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.114 [2024-12-12 20:35:46.324531] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 593.153 ms, result 0 00:25:03.049 00:25:03.049 00:25:03.049 20:35:47 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:04.949 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:04.949 20:35:49 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:04.949 20:35:49 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:04.949 20:35:49 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:05.206 20:35:49 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:05.206 20:35:49 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:05.206 20:35:49 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 78736 00:25:05.206 20:35:49 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78736 ']' 00:25:05.206 20:35:49 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78736 00:25:05.206 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78736) - No such process 00:25:05.206 Process with pid 78736 is not found 00:25:05.206 Remove shared memory files 00:25:05.206 20:35:49 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 78736 is not found' 00:25:05.206 20:35:49 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:05.206 20:35:49 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:05.206 20:35:49 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:05.206 20:35:49 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:05.206 20:35:49 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:05.206 20:35:49 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:05.206 20:35:49 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:05.206 ************************************ 00:25:05.206 END TEST ftl_restore 00:25:05.206 ************************************ 00:25:05.206 00:25:05.206 real 4m22.892s 00:25:05.206 user 4m12.515s 00:25:05.206 sys 0m11.021s 00:25:05.206 20:35:49 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.206 20:35:49 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:05.206 20:35:49 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:05.206 20:35:49 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:05.206 20:35:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.206 20:35:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:05.206 ************************************ 00:25:05.206 START TEST ftl_dirty_shutdown 00:25:05.206 ************************************ 00:25:05.206 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:05.206 * Looking for test storage... 
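[Editor's note: TEST ftl_dirty_shutdown, which starts here, assembles its FTL bdev through the RPC calls traced below. Condensed for orientation, using the addresses, sizes and UUIDs of this particular run; this is not a standalone script and assumes the spdk_tgt launched below is already listening on /var/tmp/spdk.sock:]
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base NVMe device
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs                            # (a stale lvstore is deleted first; see the trace)
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 929c7234-5cf5-4486-930a-a1fd5746c450   # thin lvol, 103424 MiB; returns the lvol UUID
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # NV-cache device
  $rpc bdev_split_create nvc0n1 -s 5171 1                              # 5171 MiB cache partition
  $rpc -t 240 bdev_ftl_create -b ftl0 -d 3d353dd7-70ef-43bf-b6a5-cb215a2e6120 --l2p_dram_limit 10 -c nvc0n1p0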
00:25:05.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:05.206 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:05.206 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:25:05.206 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:05.464 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:05.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.465 --rc genhtml_branch_coverage=1 00:25:05.465 --rc genhtml_function_coverage=1 00:25:05.465 --rc genhtml_legend=1 00:25:05.465 --rc geninfo_all_blocks=1 00:25:05.465 --rc geninfo_unexecuted_blocks=1 00:25:05.465 00:25:05.465 ' 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:05.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.465 --rc genhtml_branch_coverage=1 00:25:05.465 --rc genhtml_function_coverage=1 00:25:05.465 --rc genhtml_legend=1 00:25:05.465 --rc geninfo_all_blocks=1 00:25:05.465 --rc geninfo_unexecuted_blocks=1 00:25:05.465 00:25:05.465 ' 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:05.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.465 --rc genhtml_branch_coverage=1 00:25:05.465 --rc genhtml_function_coverage=1 00:25:05.465 --rc genhtml_legend=1 00:25:05.465 --rc geninfo_all_blocks=1 00:25:05.465 --rc geninfo_unexecuted_blocks=1 00:25:05.465 00:25:05.465 ' 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:05.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.465 --rc genhtml_branch_coverage=1 00:25:05.465 --rc genhtml_function_coverage=1 00:25:05.465 --rc genhtml_legend=1 00:25:05.465 --rc geninfo_all_blocks=1 00:25:05.465 --rc geninfo_unexecuted_blocks=1 00:25:05.465 00:25:05.465 ' 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:05.465 20:35:49 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81530 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81530 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81530 ']' 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.465 20:35:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:05.465 [2024-12-12 20:35:49.599952] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
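[Editor's note: 'waitforlisten 81530' above blocks until the freshly started spdk_tgt answers on its RPC socket. A minimal sketch of that pattern; illustrative only, SPDK's real helper in autotest_common.sh carries more error handling:]
  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while kill -0 "$pid" 2>/dev/null; do
      # rpc_get_methods succeeds once the target's RPC server is up
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        return 0
      fi
      sleep 0.1
    done
    return 1   # the process died before it ever listened
  }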
00:25:05.465 [2024-12-12 20:35:49.600071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81530 ] 00:25:05.723 [2024-12-12 20:35:49.754078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.723 [2024-12-12 20:35:49.852585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.288 20:35:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.288 20:35:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:06.288 20:35:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:06.288 20:35:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:06.288 20:35:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:06.288 20:35:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:06.288 20:35:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:06.288 20:35:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:06.547 20:35:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:06.547 20:35:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:06.547 20:35:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:06.547 20:35:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:06.547 20:35:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:06.547 20:35:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:06.547 20:35:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:06.547 20:35:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:06.806 20:35:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:06.806 { 00:25:06.806 "name": "nvme0n1", 00:25:06.806 "aliases": [ 00:25:06.806 "3fa12b9b-2bf6-4f46-9d33-259bb47840cd" 00:25:06.806 ], 00:25:06.806 "product_name": "NVMe disk", 00:25:06.806 "block_size": 4096, 00:25:06.806 "num_blocks": 1310720, 00:25:06.806 "uuid": "3fa12b9b-2bf6-4f46-9d33-259bb47840cd", 00:25:06.806 "numa_id": -1, 00:25:06.806 "assigned_rate_limits": { 00:25:06.806 "rw_ios_per_sec": 0, 00:25:06.806 "rw_mbytes_per_sec": 0, 00:25:06.806 "r_mbytes_per_sec": 0, 00:25:06.806 "w_mbytes_per_sec": 0 00:25:06.806 }, 00:25:06.806 "claimed": true, 00:25:06.806 "claim_type": "read_many_write_one", 00:25:06.806 "zoned": false, 00:25:06.806 "supported_io_types": { 00:25:06.806 "read": true, 00:25:06.806 "write": true, 00:25:06.806 "unmap": true, 00:25:06.806 "flush": true, 00:25:06.806 "reset": true, 00:25:06.806 "nvme_admin": true, 00:25:06.806 "nvme_io": true, 00:25:06.806 "nvme_io_md": false, 00:25:06.806 "write_zeroes": true, 00:25:06.806 "zcopy": false, 00:25:06.806 "get_zone_info": false, 00:25:06.806 "zone_management": false, 00:25:06.806 "zone_append": false, 00:25:06.806 "compare": true, 00:25:06.806 "compare_and_write": false, 00:25:06.806 "abort": true, 00:25:06.806 "seek_hole": false, 00:25:06.806 "seek_data": false, 00:25:06.806 
"copy": true, 00:25:06.806 "nvme_iov_md": false 00:25:06.806 }, 00:25:06.806 "driver_specific": { 00:25:06.806 "nvme": [ 00:25:06.806 { 00:25:06.806 "pci_address": "0000:00:11.0", 00:25:06.806 "trid": { 00:25:06.806 "trtype": "PCIe", 00:25:06.806 "traddr": "0000:00:11.0" 00:25:06.806 }, 00:25:06.806 "ctrlr_data": { 00:25:06.806 "cntlid": 0, 00:25:06.806 "vendor_id": "0x1b36", 00:25:06.806 "model_number": "QEMU NVMe Ctrl", 00:25:06.806 "serial_number": "12341", 00:25:06.806 "firmware_revision": "8.0.0", 00:25:06.806 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:06.806 "oacs": { 00:25:06.806 "security": 0, 00:25:06.806 "format": 1, 00:25:06.806 "firmware": 0, 00:25:06.806 "ns_manage": 1 00:25:06.806 }, 00:25:06.806 "multi_ctrlr": false, 00:25:06.806 "ana_reporting": false 00:25:06.806 }, 00:25:06.806 "vs": { 00:25:06.806 "nvme_version": "1.4" 00:25:06.806 }, 00:25:06.806 "ns_data": { 00:25:06.806 "id": 1, 00:25:06.806 "can_share": false 00:25:06.806 } 00:25:06.806 } 00:25:06.806 ], 00:25:06.806 "mp_policy": "active_passive" 00:25:06.806 } 00:25:06.806 } 00:25:06.806 ]' 00:25:06.806 20:35:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:06.806 20:35:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:06.806 20:35:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:06.806 20:35:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:06.806 20:35:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:06.806 20:35:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:06.806 20:35:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:06.806 20:35:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:06.806 20:35:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:06.806 20:35:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:06.806 20:35:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:07.064 20:35:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=d195da4a-5e60-4998-8a52-3b3039688ba5 00:25:07.064 20:35:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:07.064 20:35:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d195da4a-5e60-4998-8a52-3b3039688ba5 00:25:07.321 20:35:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:07.579 20:35:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=929c7234-5cf5-4486-930a-a1fd5746c450 00:25:07.579 20:35:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 929c7234-5cf5-4486-930a-a1fd5746c450 00:25:07.837 20:35:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=3d353dd7-70ef-43bf-b6a5-cb215a2e6120 00:25:07.837 20:35:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:07.837 20:35:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3d353dd7-70ef-43bf-b6a5-cb215a2e6120 00:25:07.837 20:35:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:07.837 20:35:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:07.837 20:35:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=3d353dd7-70ef-43bf-b6a5-cb215a2e6120 00:25:07.837 20:35:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:07.837 20:35:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3d353dd7-70ef-43bf-b6a5-cb215a2e6120 00:25:07.837 20:35:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3d353dd7-70ef-43bf-b6a5-cb215a2e6120 00:25:07.837 20:35:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:07.837 20:35:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:07.837 20:35:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:07.837 20:35:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3d353dd7-70ef-43bf-b6a5-cb215a2e6120 00:25:08.095 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:08.095 { 00:25:08.095 "name": "3d353dd7-70ef-43bf-b6a5-cb215a2e6120", 00:25:08.095 "aliases": [ 00:25:08.095 "lvs/nvme0n1p0" 00:25:08.095 ], 00:25:08.095 "product_name": "Logical Volume", 00:25:08.095 "block_size": 4096, 00:25:08.095 "num_blocks": 26476544, 00:25:08.095 "uuid": "3d353dd7-70ef-43bf-b6a5-cb215a2e6120", 00:25:08.095 "assigned_rate_limits": { 00:25:08.095 "rw_ios_per_sec": 0, 00:25:08.095 "rw_mbytes_per_sec": 0, 00:25:08.095 "r_mbytes_per_sec": 0, 00:25:08.095 "w_mbytes_per_sec": 0 00:25:08.095 }, 00:25:08.095 "claimed": false, 00:25:08.095 "zoned": false, 00:25:08.095 "supported_io_types": { 00:25:08.095 "read": true, 00:25:08.095 "write": true, 00:25:08.095 "unmap": true, 00:25:08.095 "flush": false, 00:25:08.095 "reset": true, 00:25:08.095 "nvme_admin": false, 00:25:08.095 "nvme_io": false, 00:25:08.095 "nvme_io_md": false, 00:25:08.095 "write_zeroes": true, 00:25:08.096 "zcopy": false, 00:25:08.096 "get_zone_info": false, 00:25:08.096 "zone_management": false, 00:25:08.096 "zone_append": false, 00:25:08.096 "compare": false, 00:25:08.096 "compare_and_write": false, 00:25:08.096 "abort": false, 00:25:08.096 "seek_hole": true, 00:25:08.096 "seek_data": true, 00:25:08.096 "copy": false, 00:25:08.096 "nvme_iov_md": false 00:25:08.096 }, 00:25:08.096 "driver_specific": { 00:25:08.096 "lvol": { 00:25:08.096 "lvol_store_uuid": "929c7234-5cf5-4486-930a-a1fd5746c450", 00:25:08.096 "base_bdev": "nvme0n1", 00:25:08.096 "thin_provision": true, 00:25:08.096 "num_allocated_clusters": 0, 00:25:08.096 "snapshot": false, 00:25:08.096 "clone": false, 00:25:08.096 "esnap_clone": false 00:25:08.096 } 00:25:08.096 } 00:25:08.096 } 00:25:08.096 ]' 00:25:08.096 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:08.096 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:08.096 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:08.096 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:08.096 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:08.096 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:08.096 20:35:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:08.096 20:35:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:08.096 20:35:52 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:08.353 20:35:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:08.353 20:35:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:08.353 20:35:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 3d353dd7-70ef-43bf-b6a5-cb215a2e6120 00:25:08.354 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3d353dd7-70ef-43bf-b6a5-cb215a2e6120 00:25:08.354 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:08.354 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:08.354 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:08.354 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3d353dd7-70ef-43bf-b6a5-cb215a2e6120 00:25:08.612 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:08.612 { 00:25:08.612 "name": "3d353dd7-70ef-43bf-b6a5-cb215a2e6120", 00:25:08.612 "aliases": [ 00:25:08.612 "lvs/nvme0n1p0" 00:25:08.612 ], 00:25:08.612 "product_name": "Logical Volume", 00:25:08.612 "block_size": 4096, 00:25:08.612 "num_blocks": 26476544, 00:25:08.612 "uuid": "3d353dd7-70ef-43bf-b6a5-cb215a2e6120", 00:25:08.612 "assigned_rate_limits": { 00:25:08.612 "rw_ios_per_sec": 0, 00:25:08.612 "rw_mbytes_per_sec": 0, 00:25:08.612 "r_mbytes_per_sec": 0, 00:25:08.612 "w_mbytes_per_sec": 0 00:25:08.612 }, 00:25:08.612 "claimed": false, 00:25:08.612 "zoned": false, 00:25:08.612 "supported_io_types": { 00:25:08.612 "read": true, 00:25:08.612 "write": true, 00:25:08.612 "unmap": true, 00:25:08.612 "flush": false, 00:25:08.612 "reset": true, 00:25:08.612 "nvme_admin": false, 00:25:08.612 "nvme_io": false, 00:25:08.612 "nvme_io_md": false, 00:25:08.612 "write_zeroes": true, 00:25:08.612 "zcopy": false, 00:25:08.612 "get_zone_info": false, 00:25:08.612 "zone_management": false, 00:25:08.612 "zone_append": false, 00:25:08.612 "compare": false, 00:25:08.612 "compare_and_write": false, 00:25:08.612 "abort": false, 00:25:08.612 "seek_hole": true, 00:25:08.612 "seek_data": true, 00:25:08.612 "copy": false, 00:25:08.612 "nvme_iov_md": false 00:25:08.612 }, 00:25:08.612 "driver_specific": { 00:25:08.612 "lvol": { 00:25:08.612 "lvol_store_uuid": "929c7234-5cf5-4486-930a-a1fd5746c450", 00:25:08.612 "base_bdev": "nvme0n1", 00:25:08.612 "thin_provision": true, 00:25:08.612 "num_allocated_clusters": 0, 00:25:08.612 "snapshot": false, 00:25:08.612 "clone": false, 00:25:08.612 "esnap_clone": false 00:25:08.612 } 00:25:08.612 } 00:25:08.612 } 00:25:08.612 ]' 00:25:08.612 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:08.612 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:08.612 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:08.612 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:08.612 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:08.612 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:08.612 20:35:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:08.612 20:35:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:08.870 20:35:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:08.870 20:35:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 3d353dd7-70ef-43bf-b6a5-cb215a2e6120 00:25:08.870 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3d353dd7-70ef-43bf-b6a5-cb215a2e6120 00:25:08.870 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:08.870 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:08.870 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:08.870 20:35:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3d353dd7-70ef-43bf-b6a5-cb215a2e6120 00:25:09.129 20:35:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:09.129 { 00:25:09.129 "name": "3d353dd7-70ef-43bf-b6a5-cb215a2e6120", 00:25:09.129 "aliases": [ 00:25:09.129 "lvs/nvme0n1p0" 00:25:09.129 ], 00:25:09.129 "product_name": "Logical Volume", 00:25:09.129 "block_size": 4096, 00:25:09.129 "num_blocks": 26476544, 00:25:09.129 "uuid": "3d353dd7-70ef-43bf-b6a5-cb215a2e6120", 00:25:09.129 "assigned_rate_limits": { 00:25:09.129 "rw_ios_per_sec": 0, 00:25:09.129 "rw_mbytes_per_sec": 0, 00:25:09.129 "r_mbytes_per_sec": 0, 00:25:09.129 "w_mbytes_per_sec": 0 00:25:09.129 }, 00:25:09.129 "claimed": false, 00:25:09.129 "zoned": false, 00:25:09.129 "supported_io_types": { 00:25:09.129 "read": true, 00:25:09.129 "write": true, 00:25:09.129 "unmap": true, 00:25:09.129 "flush": false, 00:25:09.129 "reset": true, 00:25:09.129 "nvme_admin": false, 00:25:09.129 "nvme_io": false, 00:25:09.129 "nvme_io_md": false, 00:25:09.129 "write_zeroes": true, 00:25:09.129 "zcopy": false, 00:25:09.129 "get_zone_info": false, 00:25:09.129 "zone_management": false, 00:25:09.129 "zone_append": false, 00:25:09.129 "compare": false, 00:25:09.129 "compare_and_write": false, 00:25:09.129 "abort": false, 00:25:09.129 "seek_hole": true, 00:25:09.129 "seek_data": true, 00:25:09.129 "copy": false, 00:25:09.129 "nvme_iov_md": false 00:25:09.129 }, 00:25:09.129 "driver_specific": { 00:25:09.129 "lvol": { 00:25:09.129 "lvol_store_uuid": "929c7234-5cf5-4486-930a-a1fd5746c450", 00:25:09.129 "base_bdev": "nvme0n1", 00:25:09.129 "thin_provision": true, 00:25:09.129 "num_allocated_clusters": 0, 00:25:09.129 "snapshot": false, 00:25:09.129 "clone": false, 00:25:09.129 "esnap_clone": false 00:25:09.129 } 00:25:09.129 } 00:25:09.129 } 00:25:09.129 ]' 00:25:09.129 20:35:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:09.129 20:35:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:09.129 20:35:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:09.129 20:35:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:09.129 20:35:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:09.129 20:35:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:09.129 20:35:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:09.129 20:35:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3d353dd7-70ef-43bf-b6a5-cb215a2e6120 
--l2p_dram_limit 10' 00:25:09.129 20:35:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:09.129 20:35:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:09.129 20:35:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:09.129 20:35:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3d353dd7-70ef-43bf-b6a5-cb215a2e6120 --l2p_dram_limit 10 -c nvc0n1p0 00:25:09.387 [2024-12-12 20:35:53.381973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.387 [2024-12-12 20:35:53.382014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:09.387 [2024-12-12 20:35:53.382027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:09.387 [2024-12-12 20:35:53.382034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.387 [2024-12-12 20:35:53.382081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.387 [2024-12-12 20:35:53.382089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:09.387 [2024-12-12 20:35:53.382097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:09.387 [2024-12-12 20:35:53.382103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.387 [2024-12-12 20:35:53.382123] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:09.387 [2024-12-12 20:35:53.382746] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:09.387 [2024-12-12 20:35:53.382766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.387 [2024-12-12 20:35:53.382772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:09.387 [2024-12-12 20:35:53.382780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:25:09.388 [2024-12-12 20:35:53.382786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.388 [2024-12-12 20:35:53.382835] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fcd2b5d7-4ee6-453c-9330-4be43066ede6 00:25:09.388 [2024-12-12 20:35:53.383775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.388 [2024-12-12 20:35:53.383800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:09.388 [2024-12-12 20:35:53.383808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:09.388 [2024-12-12 20:35:53.383815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.388 [2024-12-12 20:35:53.388597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.388 [2024-12-12 20:35:53.388627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:09.388 [2024-12-12 20:35:53.388635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.750 ms 00:25:09.388 [2024-12-12 20:35:53.388642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.388 [2024-12-12 20:35:53.388708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.388 [2024-12-12 20:35:53.388717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:09.388 [2024-12-12 20:35:53.388724] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:09.388 [2024-12-12 20:35:53.388733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.388 [2024-12-12 20:35:53.388786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.388 [2024-12-12 20:35:53.388795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:09.388 [2024-12-12 20:35:53.388801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:09.388 [2024-12-12 20:35:53.388809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.388 [2024-12-12 20:35:53.388825] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:09.388 [2024-12-12 20:35:53.391684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.388 [2024-12-12 20:35:53.391710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:09.388 [2024-12-12 20:35:53.391719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.861 ms 00:25:09.388 [2024-12-12 20:35:53.391725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.388 [2024-12-12 20:35:53.391755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.388 [2024-12-12 20:35:53.391762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:09.388 [2024-12-12 20:35:53.391769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:09.388 [2024-12-12 20:35:53.391775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.388 [2024-12-12 20:35:53.391788] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:09.388 [2024-12-12 20:35:53.391897] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:09.388 [2024-12-12 20:35:53.391909] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:09.388 [2024-12-12 20:35:53.391918] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:09.388 [2024-12-12 20:35:53.391927] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:09.388 [2024-12-12 20:35:53.391933] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:09.388 [2024-12-12 20:35:53.391940] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:09.388 [2024-12-12 20:35:53.391946] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:09.388 [2024-12-12 20:35:53.391955] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:09.388 [2024-12-12 20:35:53.391960] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:09.388 [2024-12-12 20:35:53.391967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.388 [2024-12-12 20:35:53.391977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:09.388 [2024-12-12 20:35:53.391985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:25:09.388 [2024-12-12 20:35:53.391991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.388 [2024-12-12 20:35:53.392056] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.388 [2024-12-12 20:35:53.392062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:09.388 [2024-12-12 20:35:53.392068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:09.388 [2024-12-12 20:35:53.392073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.388 [2024-12-12 20:35:53.392149] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:09.388 [2024-12-12 20:35:53.392156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:09.388 [2024-12-12 20:35:53.392164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.388 [2024-12-12 20:35:53.392169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.388 [2024-12-12 20:35:53.392176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:09.388 [2024-12-12 20:35:53.392181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:09.388 [2024-12-12 20:35:53.392188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:09.388 [2024-12-12 20:35:53.392193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:09.388 [2024-12-12 20:35:53.392199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:09.388 [2024-12-12 20:35:53.392204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.388 [2024-12-12 20:35:53.392210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:09.388 [2024-12-12 20:35:53.392215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:09.388 [2024-12-12 20:35:53.392222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.388 [2024-12-12 20:35:53.392227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:09.388 [2024-12-12 20:35:53.392234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:09.388 [2024-12-12 20:35:53.392239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.388 [2024-12-12 20:35:53.392247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:09.388 [2024-12-12 20:35:53.392252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:09.388 [2024-12-12 20:35:53.392258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.388 [2024-12-12 20:35:53.392263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:09.388 [2024-12-12 20:35:53.392269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:09.388 [2024-12-12 20:35:53.392274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.388 [2024-12-12 20:35:53.392281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:09.388 [2024-12-12 20:35:53.392285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:09.388 [2024-12-12 20:35:53.392291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.388 [2024-12-12 20:35:53.392296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:09.388 [2024-12-12 20:35:53.392302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:09.388 [2024-12-12 20:35:53.392307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.388 [2024-12-12 20:35:53.392312] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:09.388 [2024-12-12 20:35:53.392317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:09.388 [2024-12-12 20:35:53.392323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.388 [2024-12-12 20:35:53.392328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:09.388 [2024-12-12 20:35:53.392335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:09.388 [2024-12-12 20:35:53.392340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.388 [2024-12-12 20:35:53.392347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:09.388 [2024-12-12 20:35:53.392351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:09.388 [2024-12-12 20:35:53.392357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.388 [2024-12-12 20:35:53.392362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:09.388 [2024-12-12 20:35:53.392369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:09.388 [2024-12-12 20:35:53.392374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.388 [2024-12-12 20:35:53.392380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:09.388 [2024-12-12 20:35:53.392385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:09.388 [2024-12-12 20:35:53.392391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.388 [2024-12-12 20:35:53.392395] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:09.388 [2024-12-12 20:35:53.392402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:09.388 [2024-12-12 20:35:53.392407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.388 [2024-12-12 20:35:53.392429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.388 [2024-12-12 20:35:53.392435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:09.388 [2024-12-12 20:35:53.392443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:09.388 [2024-12-12 20:35:53.392449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:09.388 [2024-12-12 20:35:53.392455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:09.388 [2024-12-12 20:35:53.392460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:09.388 [2024-12-12 20:35:53.392466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:09.388 [2024-12-12 20:35:53.392473] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:09.388 [2024-12-12 20:35:53.392481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.388 [2024-12-12 20:35:53.392489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:09.388 [2024-12-12 20:35:53.392496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:09.388 [2024-12-12 20:35:53.392501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:09.388 [2024-12-12 20:35:53.392507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:09.389 [2024-12-12 20:35:53.392513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:09.389 [2024-12-12 20:35:53.392520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:09.389 [2024-12-12 20:35:53.392526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:09.389 [2024-12-12 20:35:53.392533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:09.389 [2024-12-12 20:35:53.392538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:09.389 [2024-12-12 20:35:53.392546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:09.389 [2024-12-12 20:35:53.392552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:09.389 [2024-12-12 20:35:53.392558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:09.389 [2024-12-12 20:35:53.392563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:09.389 [2024-12-12 20:35:53.392570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:09.389 [2024-12-12 20:35:53.392575] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:09.389 [2024-12-12 20:35:53.392583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.389 [2024-12-12 20:35:53.392589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:09.389 [2024-12-12 20:35:53.392596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:09.389 [2024-12-12 20:35:53.392601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:09.389 [2024-12-12 20:35:53.392608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:09.389 [2024-12-12 20:35:53.392613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.389 [2024-12-12 20:35:53.392620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:09.389 [2024-12-12 20:35:53.392625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:25:09.389 [2024-12-12 20:35:53.392633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.389 [2024-12-12 20:35:53.392661] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:09.389 [2024-12-12 20:35:53.392671] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:11.917 [2024-12-12 20:35:56.102621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.917 [2024-12-12 20:35:56.102682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:11.917 [2024-12-12 20:35:56.102697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2709.948 ms 00:25:11.917 [2024-12-12 20:35:56.102707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.917 [2024-12-12 20:35:56.127931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.917 [2024-12-12 20:35:56.127980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:11.917 [2024-12-12 20:35:56.127992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.023 ms 00:25:11.917 [2024-12-12 20:35:56.128002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.917 [2024-12-12 20:35:56.128127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.917 [2024-12-12 20:35:56.128140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:11.917 [2024-12-12 20:35:56.128149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:11.917 [2024-12-12 20:35:56.128162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.176 [2024-12-12 20:35:56.158469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.176 [2024-12-12 20:35:56.158509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:12.176 [2024-12-12 20:35:56.158519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.272 ms 00:25:12.176 [2024-12-12 20:35:56.158528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.176 [2024-12-12 20:35:56.158557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.176 [2024-12-12 20:35:56.158571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:12.176 [2024-12-12 20:35:56.158579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:12.176 [2024-12-12 20:35:56.158593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.176 [2024-12-12 20:35:56.158938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.176 [2024-12-12 20:35:56.158956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:12.176 [2024-12-12 20:35:56.158965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:25:12.176 [2024-12-12 20:35:56.158974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.176 [2024-12-12 20:35:56.159073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.176 [2024-12-12 20:35:56.159083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:12.176 [2024-12-12 20:35:56.159093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:12.176 [2024-12-12 20:35:56.159103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.176 [2024-12-12 20:35:56.172953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.176 [2024-12-12 20:35:56.172988] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:12.176 [2024-12-12 20:35:56.172997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.833 ms 00:25:12.176 [2024-12-12 20:35:56.173006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.176 [2024-12-12 20:35:56.194163] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:12.176 [2024-12-12 20:35:56.197060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.176 [2024-12-12 20:35:56.197093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:12.176 [2024-12-12 20:35:56.197108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.984 ms 00:25:12.176 [2024-12-12 20:35:56.197116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.176 [2024-12-12 20:35:56.265924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.176 [2024-12-12 20:35:56.265975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:12.176 [2024-12-12 20:35:56.265989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.765 ms 00:25:12.176 [2024-12-12 20:35:56.265998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.176 [2024-12-12 20:35:56.266201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.176 [2024-12-12 20:35:56.266215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:12.176 [2024-12-12 20:35:56.266227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:25:12.176 [2024-12-12 20:35:56.266235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.176 [2024-12-12 20:35:56.289836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.176 [2024-12-12 20:35:56.289872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:12.176 [2024-12-12 20:35:56.289884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.565 ms 00:25:12.176 [2024-12-12 20:35:56.289892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.176 [2024-12-12 20:35:56.312814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.176 [2024-12-12 20:35:56.312847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:12.176 [2024-12-12 20:35:56.312859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.882 ms 00:25:12.176 [2024-12-12 20:35:56.312866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.176 [2024-12-12 20:35:56.313435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.176 [2024-12-12 20:35:56.313451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:12.176 [2024-12-12 20:35:56.313462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:25:12.176 [2024-12-12 20:35:56.313471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.176 [2024-12-12 20:35:56.383680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.176 [2024-12-12 20:35:56.383719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:12.176 [2024-12-12 20:35:56.383734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.175 ms 00:25:12.176 [2024-12-12 20:35:56.383742] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.435 [2024-12-12 20:35:56.408262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.435 [2024-12-12 20:35:56.408301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:12.435 [2024-12-12 20:35:56.408314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.449 ms 00:25:12.435 [2024-12-12 20:35:56.408321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.435 [2024-12-12 20:35:56.431916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.435 [2024-12-12 20:35:56.431955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:12.435 [2024-12-12 20:35:56.431968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.555 ms 00:25:12.435 [2024-12-12 20:35:56.431976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.435 [2024-12-12 20:35:56.456043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.435 [2024-12-12 20:35:56.456077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:12.435 [2024-12-12 20:35:56.456090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.030 ms 00:25:12.435 [2024-12-12 20:35:56.456097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.435 [2024-12-12 20:35:56.456134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.435 [2024-12-12 20:35:56.456143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:12.435 [2024-12-12 20:35:56.456156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:12.435 [2024-12-12 20:35:56.456163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.435 [2024-12-12 20:35:56.456242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.435 [2024-12-12 20:35:56.456255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:12.435 [2024-12-12 20:35:56.456264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:12.435 [2024-12-12 20:35:56.456271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.435 [2024-12-12 20:35:56.457078] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3074.704 ms, result 0 00:25:12.435 { 00:25:12.435 "name": "ftl0", 00:25:12.435 "uuid": "fcd2b5d7-4ee6-453c-9330-4be43066ede6" 00:25:12.435 } 00:25:12.435 20:35:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:12.435 20:35:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:12.694 /dev/nbd0 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:12.694 1+0 records in 00:25:12.694 1+0 records out 00:25:12.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344736 s, 11.9 MB/s 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:12.694 20:35:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:12.952 [2024-12-12 20:35:56.963549] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:25:12.952 [2024-12-12 20:35:56.963664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81667 ] 00:25:12.952 [2024-12-12 20:35:57.124049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.211 [2024-12-12 20:35:57.222879] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.585  [2024-12-12T20:35:59.747Z] Copying: 194/1024 [MB] (194 MBps) [2024-12-12T20:36:00.708Z] Copying: 389/1024 [MB] (194 MBps) [2024-12-12T20:36:01.641Z] Copying: 570/1024 [MB] (181 MBps) [2024-12-12T20:36:02.574Z] Copying: 776/1024 [MB] (205 MBps) [2024-12-12T20:36:03.140Z] Copying: 1024/1024 [MB] (average 206 MBps) 00:25:18.912 00:25:18.912 20:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:21.440 20:36:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:21.440 [2024-12-12 20:36:05.124831] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
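
The waitfornbd helper traced above (autotest_common.sh@872-893) is a two-stage readiness check: poll /proc/partitions until the kernel publishes the nbd device, then prove the device actually services I/O with a single 4 KiB O_DIRECT read whose size is verified via stat. A minimal bash sketch of that logic, reconstructed from the xtrace — the 20-attempt bound, the block size, and the grep/dd/stat probes are taken from the trace, while the function shape, the sleep back-off, and the /tmp probe path are assumptions:

    # Sketch of the nbd readiness check traced above (not the verbatim helper).
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # stage 1: wait for the device to show up in /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed back-off; the trace does not show the interval
        done
        # stage 2: prove the device answers reads with one 4 KiB direct read
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]] || return 1
        rm -f /tmp/nbdtest
    }

With the device ready, dirty_shutdown.sh@75 fills testfile with 1 GiB of /dev/urandom via spdk_dd (262144 blocks x 4096 bytes), @76 records its md5sum, and @77 replays the file onto /dev/nbd0 with --oflag=direct so the data lands on the FTL device rather than in the page cache — the checksum is presumably compared against a read-back later in the test.
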
00:25:21.440 [2024-12-12 20:36:05.124924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81755 ] 00:25:21.440 [2024-12-12 20:36:05.280657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.440 [2024-12-12 20:36:05.378321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.373  [2024-12-12T20:36:07.996Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-12T20:36:08.929Z] Copying: 44/1024 [MB] (24 MBps) [2024-12-12T20:36:09.863Z] Copying: 73/1024 [MB] (29 MBps) [2024-12-12T20:36:10.796Z] Copying: 102/1024 [MB] (29 MBps) [2024-12-12T20:36:11.729Z] Copying: 132/1024 [MB] (30 MBps) [2024-12-12T20:36:12.664Z] Copying: 166/1024 [MB] (33 MBps) [2024-12-12T20:36:13.597Z] Copying: 197/1024 [MB] (31 MBps) [2024-12-12T20:36:14.972Z] Copying: 227/1024 [MB] (29 MBps) [2024-12-12T20:36:15.904Z] Copying: 257/1024 [MB] (29 MBps) [2024-12-12T20:36:16.838Z] Copying: 286/1024 [MB] (29 MBps) [2024-12-12T20:36:17.770Z] Copying: 316/1024 [MB] (29 MBps) [2024-12-12T20:36:18.703Z] Copying: 346/1024 [MB] (30 MBps) [2024-12-12T20:36:19.636Z] Copying: 379/1024 [MB] (33 MBps) [2024-12-12T20:36:21.008Z] Copying: 415/1024 [MB] (35 MBps) [2024-12-12T20:36:21.942Z] Copying: 446/1024 [MB] (30 MBps) [2024-12-12T20:36:22.876Z] Copying: 475/1024 [MB] (29 MBps) [2024-12-12T20:36:23.810Z] Copying: 505/1024 [MB] (29 MBps) [2024-12-12T20:36:24.744Z] Copying: 535/1024 [MB] (29 MBps) [2024-12-12T20:36:25.676Z] Copying: 566/1024 [MB] (31 MBps) [2024-12-12T20:36:26.610Z] Copying: 596/1024 [MB] (29 MBps) [2024-12-12T20:36:28.017Z] Copying: 625/1024 [MB] (28 MBps) [2024-12-12T20:36:28.950Z] Copying: 654/1024 [MB] (29 MBps) [2024-12-12T20:36:29.885Z] Copying: 686/1024 [MB] (32 MBps) [2024-12-12T20:36:30.819Z] Copying: 718/1024 [MB] (31 MBps) [2024-12-12T20:36:31.753Z] Copying: 747/1024 [MB] (29 MBps) [2024-12-12T20:36:32.686Z] Copying: 776/1024 [MB] (29 MBps) [2024-12-12T20:36:33.624Z] Copying: 806/1024 [MB] (30 MBps) [2024-12-12T20:36:35.031Z] Copying: 836/1024 [MB] (29 MBps) [2024-12-12T20:36:35.963Z] Copying: 867/1024 [MB] (30 MBps) [2024-12-12T20:36:36.896Z] Copying: 897/1024 [MB] (29 MBps) [2024-12-12T20:36:37.835Z] Copying: 928/1024 [MB] (30 MBps) [2024-12-12T20:36:38.768Z] Copying: 957/1024 [MB] (29 MBps) [2024-12-12T20:36:39.701Z] Copying: 988/1024 [MB] (30 MBps) [2024-12-12T20:36:39.959Z] Copying: 1018/1024 [MB] (29 MBps) [2024-12-12T20:36:40.564Z] Copying: 1024/1024 [MB] (average 29 MBps) 00:25:56.336 00:25:56.336 20:36:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:56.336 20:36:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:56.596 20:36:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:56.596 [2024-12-12 20:36:40.774440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.596 [2024-12-12 20:36:40.774483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:56.596 [2024-12-12 20:36:40.774494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:56.596 [2024-12-12 20:36:40.774503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.596 [2024-12-12 20:36:40.774523] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:56.596 [2024-12-12 20:36:40.776647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.596 [2024-12-12 20:36:40.776670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:56.596 [2024-12-12 20:36:40.776680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms 00:25:56.596 [2024-12-12 20:36:40.776687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.596 [2024-12-12 20:36:40.778639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.596 [2024-12-12 20:36:40.778662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:56.596 [2024-12-12 20:36:40.778672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.929 ms 00:25:56.596 [2024-12-12 20:36:40.778679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.596 [2024-12-12 20:36:40.790705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.596 [2024-12-12 20:36:40.790731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:56.596 [2024-12-12 20:36:40.790742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.010 ms 00:25:56.596 [2024-12-12 20:36:40.790748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.596 [2024-12-12 20:36:40.795500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.596 [2024-12-12 20:36:40.795520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:56.596 [2024-12-12 20:36:40.795529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.723 ms 00:25:56.596 [2024-12-12 20:36:40.795535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.596 [2024-12-12 20:36:40.813947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.596 [2024-12-12 20:36:40.813973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:56.596 [2024-12-12 20:36:40.813983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.358 ms 00:25:56.596 [2024-12-12 20:36:40.813989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.855 [2024-12-12 20:36:40.825884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.855 [2024-12-12 20:36:40.825910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:56.855 [2024-12-12 20:36:40.825923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.860 ms 00:25:56.855 [2024-12-12 20:36:40.825929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.855 [2024-12-12 20:36:40.826037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.855 [2024-12-12 20:36:40.826045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:56.855 [2024-12-12 20:36:40.826052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:25:56.855 [2024-12-12 20:36:40.826058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.855 [2024-12-12 20:36:40.843568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.855 [2024-12-12 20:36:40.843591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:56.855 [2024-12-12 20:36:40.843600] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.494 ms 00:25:56.855 [2024-12-12 20:36:40.843606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.855 [2024-12-12 20:36:40.860860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.855 [2024-12-12 20:36:40.860883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:56.855 [2024-12-12 20:36:40.860893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.225 ms 00:25:56.855 [2024-12-12 20:36:40.860898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.855 [2024-12-12 20:36:40.957639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.855 [2024-12-12 20:36:40.957663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:56.855 [2024-12-12 20:36:40.957673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.709 ms 00:25:56.855 [2024-12-12 20:36:40.957678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.855 [2024-12-12 20:36:40.974879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.855 [2024-12-12 20:36:40.974902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:56.855 [2024-12-12 20:36:40.974911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.143 ms 00:25:56.855 [2024-12-12 20:36:40.974917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.855 [2024-12-12 20:36:40.974944] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:56.855 [2024-12-12 20:36:40.974956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:56.855 [2024-12-12 20:36:40.974965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:56.855 [2024-12-12 20:36:40.974971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:56.855 [2024-12-12 20:36:40.974979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:56.855 [2024-12-12 20:36:40.974984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:56.855 [2024-12-12 20:36:40.974992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:56.855 [2024-12-12 20:36:40.974998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:56.855 [2024-12-12 20:36:40.975006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:56.855 [2024-12-12 20:36:40.975012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:56.855 [2024-12-12 20:36:40.975019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:56.855 [2024-12-12 20:36:40.975024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975044] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 
20:36:40.975211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 
00:25:56.856 [2024-12-12 20:36:40.975376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 
wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:56.856 [2024-12-12 20:36:40.975634] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:56.856 [2024-12-12 20:36:40.975640] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fcd2b5d7-4ee6-453c-9330-4be43066ede6 00:25:56.857 [2024-12-12 20:36:40.975646] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:56.857 [2024-12-12 20:36:40.975654] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:56.857 [2024-12-12 20:36:40.975659] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:56.857 [2024-12-12 20:36:40.975667] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:56.857 [2024-12-12 20:36:40.975673] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:56.857 [2024-12-12 20:36:40.975680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:56.857 [2024-12-12 20:36:40.975685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:56.857 [2024-12-12 20:36:40.975691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:56.857 [2024-12-12 20:36:40.975696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:56.857 [2024-12-12 20:36:40.975702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.857 [2024-12-12 20:36:40.975708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:56.857 [2024-12-12 20:36:40.975715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:25:56.857 [2024-12-12 20:36:40.975720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.857 [2024-12-12 20:36:40.985215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:56.857 [2024-12-12 20:36:40.985238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:56.857 [2024-12-12 20:36:40.985246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.469 ms 00:25:56.857 [2024-12-12 20:36:40.985251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.857 [2024-12-12 20:36:40.985532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.857 [2024-12-12 20:36:40.985540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:56.857 [2024-12-12 20:36:40.985548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:25:56.857 [2024-12-12 20:36:40.985553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.857 [2024-12-12 20:36:41.018213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.857 [2024-12-12 20:36:41.018237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:56.857 [2024-12-12 20:36:41.018247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.857 [2024-12-12 20:36:41.018253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.857 [2024-12-12 20:36:41.018300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.857 [2024-12-12 20:36:41.018306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:56.857 [2024-12-12 20:36:41.018313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.857 [2024-12-12 20:36:41.018319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.857 [2024-12-12 20:36:41.018381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.857 [2024-12-12 20:36:41.018390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:56.857 [2024-12-12 20:36:41.018397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.857 [2024-12-12 20:36:41.018403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.857 [2024-12-12 20:36:41.018434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.857 [2024-12-12 20:36:41.018441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:56.857 [2024-12-12 20:36:41.018448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.857 [2024-12-12 20:36:41.018453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.857 [2024-12-12 20:36:41.077196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.857 [2024-12-12 20:36:41.077228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:56.857 [2024-12-12 20:36:41.077237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.857 [2024-12-12 20:36:41.077243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.115 [2024-12-12 20:36:41.124645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.115 [2024-12-12 20:36:41.124679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.115 [2024-12-12 20:36:41.124688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.115 [2024-12-12 20:36:41.124695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.115 [2024-12-12 
20:36:41.124755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.115 [2024-12-12 20:36:41.124762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.115 [2024-12-12 20:36:41.124772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.115 [2024-12-12 20:36:41.124778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.115 [2024-12-12 20:36:41.124825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.115 [2024-12-12 20:36:41.124833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.115 [2024-12-12 20:36:41.124840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.115 [2024-12-12 20:36:41.124845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.115 [2024-12-12 20:36:41.124915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.115 [2024-12-12 20:36:41.124922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.115 [2024-12-12 20:36:41.124930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.115 [2024-12-12 20:36:41.124937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.115 [2024-12-12 20:36:41.124962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.115 [2024-12-12 20:36:41.124969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:57.115 [2024-12-12 20:36:41.124976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.115 [2024-12-12 20:36:41.124981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.115 [2024-12-12 20:36:41.125014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.115 [2024-12-12 20:36:41.125020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.115 [2024-12-12 20:36:41.125027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.115 [2024-12-12 20:36:41.125035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.115 [2024-12-12 20:36:41.125070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.115 [2024-12-12 20:36:41.125077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.115 [2024-12-12 20:36:41.125084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.115 [2024-12-12 20:36:41.125090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.115 [2024-12-12 20:36:41.125193] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 350.733 ms, result 0 00:25:57.115 true 00:25:57.115 20:36:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81530 00:25:57.116 20:36:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81530 00:25:57.116 20:36:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:57.116 [2024-12-12 20:36:41.214927] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
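
This is the dirty shutdown the test is named for. The FTL device itself was unloaded gracefully at dirty_shutdown.sh@80, but @83 SIGKILLs the spdk_tgt process (pid 81530) that still owns the underlying lvol store, so nothing below it shuts down cleanly; @84 clears its trace pid file, @87 generates a second 1 GiB file of random data, and @88 (below) writes that file into ftl0 from a standalone spdk_dd that rebuilds the bdev stack from the JSON config apparently assembled at @64-@66 with save_subsystem_config — hence the 'Currently unable to find bdev' retries and the 'Performing recovery on blobstore' notice that follow. A condensed sketch of the sequence; the commands mirror the traced script lines, with $svcpid and $testdir standing in for the literal pid and repo paths:

    # Sketch of dirty_shutdown.sh@83..@88 as traced; $svcpid and $testdir are
    # placeholders for the literal pid (81530) and paths shown in the log.
    kill -9 "$svcpid"                              # @83: target dies without a clean stop
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"     # @84
    spdk_dd --if=/dev/urandom --of="$testdir/testfile2" \
        --bs=4096 --count=262144                   # @87: second 1 GiB of random data
    spdk_dd --if="$testdir/testfile2" --ob=ftl0 \
        --count=262144 --seek=262144 \
        --json="$testdir/config/ftl.json"          # @88: re-attach ftl0 and write past
                                                   #      the first 1 GiB

The --seek=262144 offsets the second write past the region written before the kill, presumably so both data sets can be verified independently after recovery.
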
00:25:57.116 [2024-12-12 20:36:41.215047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82130 ] 00:25:57.374 [2024-12-12 20:36:41.371652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.374 [2024-12-12 20:36:41.448466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.746  [2024-12-12T20:36:43.907Z] Copying: 255/1024 [MB] (255 MBps) [2024-12-12T20:36:44.840Z] Copying: 513/1024 [MB] (257 MBps) [2024-12-12T20:36:45.773Z] Copying: 769/1024 [MB] (256 MBps) [2024-12-12T20:36:46.338Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:26:02.110 00:26:02.110 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81530 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:02.110 20:36:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:02.111 [2024-12-12 20:36:46.232095] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:26:02.111 [2024-12-12 20:36:46.232216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82185 ] 00:26:02.368 [2024-12-12 20:36:46.387955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.368 [2024-12-12 20:36:46.463342] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.627 [2024-12-12 20:36:46.674506] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:02.627 [2024-12-12 20:36:46.674560] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:02.627 [2024-12-12 20:36:46.737108] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:02.627 [2024-12-12 20:36:46.737386] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:02.627 [2024-12-12 20:36:46.737622] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:02.885 [2024-12-12 20:36:46.910150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.885 [2024-12-12 20:36:46.910196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:02.885 [2024-12-12 20:36:46.910207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:02.885 [2024-12-12 20:36:46.910215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.885 [2024-12-12 20:36:46.910250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.885 [2024-12-12 20:36:46.910258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:02.885 [2024-12-12 20:36:46.910264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:02.885 [2024-12-12 20:36:46.910270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.885 [2024-12-12 20:36:46.910282] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:02.885 [2024-12-12 20:36:46.910788] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:02.885 [2024-12-12 20:36:46.910806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.885 [2024-12-12 20:36:46.910813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:02.885 [2024-12-12 20:36:46.910819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:26:02.885 [2024-12-12 20:36:46.910825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.885 [2024-12-12 20:36:46.911767] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:02.885 [2024-12-12 20:36:46.921501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.885 [2024-12-12 20:36:46.921531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:02.885 [2024-12-12 20:36:46.921540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.735 ms 00:26:02.885 [2024-12-12 20:36:46.921547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.885 [2024-12-12 20:36:46.921590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.885 [2024-12-12 20:36:46.921598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:02.885 [2024-12-12 20:36:46.921604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:02.885 [2024-12-12 20:36:46.921610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.885 [2024-12-12 20:36:46.925926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.885 [2024-12-12 20:36:46.925953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:02.885 [2024-12-12 20:36:46.925960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.279 ms 00:26:02.885 [2024-12-12 20:36:46.925966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.885 [2024-12-12 20:36:46.926020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.885 [2024-12-12 20:36:46.926027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:02.885 [2024-12-12 20:36:46.926033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:02.885 [2024-12-12 20:36:46.926041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.885 [2024-12-12 20:36:46.926078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.885 [2024-12-12 20:36:46.926085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:02.885 [2024-12-12 20:36:46.926091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:02.885 [2024-12-12 20:36:46.926096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.886 [2024-12-12 20:36:46.926123] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:02.886 [2024-12-12 20:36:46.928671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.886 [2024-12-12 20:36:46.928695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:02.886 [2024-12-12 20:36:46.928702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.552 ms 00:26:02.886 [2024-12-12 20:36:46.928708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.886 [2024-12-12 20:36:46.928736] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.886 [2024-12-12 20:36:46.928743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:02.886 [2024-12-12 20:36:46.928750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:02.886 [2024-12-12 20:36:46.928757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.886 [2024-12-12 20:36:46.928771] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:02.886 [2024-12-12 20:36:46.928786] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:02.886 [2024-12-12 20:36:46.928812] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:02.886 [2024-12-12 20:36:46.928824] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:02.886 [2024-12-12 20:36:46.928902] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:02.886 [2024-12-12 20:36:46.928910] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:02.886 [2024-12-12 20:36:46.928920] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:02.886 [2024-12-12 20:36:46.928928] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:02.886 [2024-12-12 20:36:46.928935] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:02.886 [2024-12-12 20:36:46.928941] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:02.886 [2024-12-12 20:36:46.928946] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:02.886 [2024-12-12 20:36:46.928952] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:02.886 [2024-12-12 20:36:46.928957] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:02.886 [2024-12-12 20:36:46.928963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.886 [2024-12-12 20:36:46.928968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:02.886 [2024-12-12 20:36:46.928974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:26:02.886 [2024-12-12 20:36:46.928979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.886 [2024-12-12 20:36:46.929044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.886 [2024-12-12 20:36:46.929050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:02.886 [2024-12-12 20:36:46.929056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:02.886 [2024-12-12 20:36:46.929062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.886 [2024-12-12 20:36:46.929134] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:02.886 [2024-12-12 20:36:46.929141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:02.886 [2024-12-12 20:36:46.929147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:02.886 [2024-12-12 20:36:46.929152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:26:02.886 [2024-12-12 20:36:46.929158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:02.886 [2024-12-12 20:36:46.929163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:02.886 [2024-12-12 20:36:46.929168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:02.886 [2024-12-12 20:36:46.929173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:02.886 [2024-12-12 20:36:46.929178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:02.886 [2024-12-12 20:36:46.929188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:02.886 [2024-12-12 20:36:46.929193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:02.886 [2024-12-12 20:36:46.929198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:02.886 [2024-12-12 20:36:46.929206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:02.886 [2024-12-12 20:36:46.929212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:02.886 [2024-12-12 20:36:46.929217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:02.886 [2024-12-12 20:36:46.929222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.886 [2024-12-12 20:36:46.929227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:02.886 [2024-12-12 20:36:46.929232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:02.886 [2024-12-12 20:36:46.929237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.886 [2024-12-12 20:36:46.929243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:02.886 [2024-12-12 20:36:46.929248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:02.886 [2024-12-12 20:36:46.929252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:02.886 [2024-12-12 20:36:46.929258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:02.886 [2024-12-12 20:36:46.929263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:02.886 [2024-12-12 20:36:46.929268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:02.886 [2024-12-12 20:36:46.929273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:02.886 [2024-12-12 20:36:46.929278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:02.886 [2024-12-12 20:36:46.929283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:02.886 [2024-12-12 20:36:46.929287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:02.886 [2024-12-12 20:36:46.929292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:02.886 [2024-12-12 20:36:46.929297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:02.886 [2024-12-12 20:36:46.929302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:02.886 [2024-12-12 20:36:46.929307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:02.886 [2024-12-12 20:36:46.929311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:02.886 [2024-12-12 20:36:46.929316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:02.886 [2024-12-12 20:36:46.929321] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:02.886 [2024-12-12 20:36:46.929325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:02.886 [2024-12-12 20:36:46.929330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:02.886 [2024-12-12 20:36:46.929335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:02.886 [2024-12-12 20:36:46.929340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.886 [2024-12-12 20:36:46.929345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:02.886 [2024-12-12 20:36:46.929350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:02.886 [2024-12-12 20:36:46.929355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.886 [2024-12-12 20:36:46.929359] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:02.886 [2024-12-12 20:36:46.929369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:02.886 [2024-12-12 20:36:46.929375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:02.886 [2024-12-12 20:36:46.929380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.886 [2024-12-12 20:36:46.929386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:02.886 [2024-12-12 20:36:46.929391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:02.886 [2024-12-12 20:36:46.929396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:02.886 [2024-12-12 20:36:46.929401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:02.886 [2024-12-12 20:36:46.929406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:02.886 [2024-12-12 20:36:46.929411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:02.886 [2024-12-12 20:36:46.929429] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:02.886 [2024-12-12 20:36:46.929436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:02.886 [2024-12-12 20:36:46.929442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:02.886 [2024-12-12 20:36:46.929448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:02.886 [2024-12-12 20:36:46.929454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:02.886 [2024-12-12 20:36:46.929460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:02.886 [2024-12-12 20:36:46.929465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:02.886 [2024-12-12 20:36:46.929470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:02.886 [2024-12-12 20:36:46.929476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:02.886 [2024-12-12 20:36:46.929481] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:02.886 [2024-12-12 20:36:46.929486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:02.886 [2024-12-12 20:36:46.929492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:02.886 [2024-12-12 20:36:46.929498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:02.886 [2024-12-12 20:36:46.929503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:02.886 [2024-12-12 20:36:46.929508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:02.886 [2024-12-12 20:36:46.929514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:02.886 [2024-12-12 20:36:46.929519] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:02.886 [2024-12-12 20:36:46.929525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:02.887 [2024-12-12 20:36:46.929531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:02.887 [2024-12-12 20:36:46.929537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:02.887 [2024-12-12 20:36:46.929542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:02.887 [2024-12-12 20:36:46.929549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:02.887 [2024-12-12 20:36:46.929554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:46.929564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:02.887 [2024-12-12 20:36:46.929569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:26:02.887 [2024-12-12 20:36:46.929577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:46.950179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:46.950207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:02.887 [2024-12-12 20:36:46.950216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.569 ms 00:26:02.887 [2024-12-12 20:36:46.950224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:46.950289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:46.950296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:02.887 [2024-12-12 20:36:46.950303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:02.887 [2024-12-12 20:36:46.950308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:46.993558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:46.993603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:02.887 [2024-12-12 20:36:46.993614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.205 ms 00:26:02.887 [2024-12-12 20:36:46.993620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:46.993667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:46.993674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:02.887 [2024-12-12 20:36:46.993681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:02.887 [2024-12-12 20:36:46.993687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:46.994014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:46.994035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:02.887 [2024-12-12 20:36:46.994048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:26:02.887 [2024-12-12 20:36:46.994053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:46.994159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:46.994181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:02.887 [2024-12-12 20:36:46.994187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:26:02.887 [2024-12-12 20:36:46.994193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:47.004553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:47.004581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:02.887 [2024-12-12 20:36:47.004590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.342 ms 00:26:02.887 [2024-12-12 20:36:47.004596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:47.014344] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:02.887 [2024-12-12 20:36:47.014377] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:02.887 [2024-12-12 20:36:47.014386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:47.014393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:02.887 [2024-12-12 20:36:47.014400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.698 ms 00:26:02.887 [2024-12-12 20:36:47.014405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:47.033498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:47.033536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:02.887 [2024-12-12 20:36:47.033545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.052 ms 00:26:02.887 [2024-12-12 20:36:47.033551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:47.042431] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:47.042460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:02.887 [2024-12-12 20:36:47.042468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.838 ms 00:26:02.887 [2024-12-12 20:36:47.042474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:47.051169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:47.051197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:02.887 [2024-12-12 20:36:47.051205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.667 ms 00:26:02.887 [2024-12-12 20:36:47.051211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:47.051719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:47.051739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:02.887 [2024-12-12 20:36:47.051746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:26:02.887 [2024-12-12 20:36:47.051752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:47.094964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:47.095012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:02.887 [2024-12-12 20:36:47.095022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.197 ms 00:26:02.887 [2024-12-12 20:36:47.095029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:47.103000] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:02.887 [2024-12-12 20:36:47.104939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:47.104959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:02.887 [2024-12-12 20:36:47.104972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.872 ms 00:26:02.887 [2024-12-12 20:36:47.104979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:47.105047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:47.105056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:02.887 [2024-12-12 20:36:47.105063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:02.887 [2024-12-12 20:36:47.105069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:47.105124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:47.105132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:02.887 [2024-12-12 20:36:47.105138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:02.887 [2024-12-12 20:36:47.105144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:47.105169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:47.105175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:02.887 [2024-12-12 20:36:47.105181] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:02.887 [2024-12-12 20:36:47.105187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.887 [2024-12-12 20:36:47.105211] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:02.887 [2024-12-12 20:36:47.105219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.887 [2024-12-12 20:36:47.105224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:02.887 [2024-12-12 20:36:47.105233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:02.887 [2024-12-12 20:36:47.105238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.145 [2024-12-12 20:36:47.122958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.145 [2024-12-12 20:36:47.122993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:03.145 [2024-12-12 20:36:47.123003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.704 ms 00:26:03.145 [2024-12-12 20:36:47.123009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.145 [2024-12-12 20:36:47.123066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.145 [2024-12-12 20:36:47.123074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:03.145 [2024-12-12 20:36:47.123080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:03.145 [2024-12-12 20:36:47.123089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.145 [2024-12-12 20:36:47.124184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 213.697 ms, result 0 00:26:04.077  [2024-12-12T20:36:49.238Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-12T20:36:50.171Z] Copying: 56/1024 [MB] (33 MBps) [2024-12-12T20:36:51.544Z] Copying: 77/1024 [MB] (21 MBps) [2024-12-12T20:36:52.478Z] Copying: 98/1024 [MB] (21 MBps) [2024-12-12T20:36:53.424Z] Copying: 120/1024 [MB] (21 MBps) [2024-12-12T20:36:54.373Z] Copying: 137/1024 [MB] (17 MBps) [2024-12-12T20:36:55.305Z] Copying: 155/1024 [MB] (18 MBps) [2024-12-12T20:36:56.238Z] Copying: 176/1024 [MB] (20 MBps) [2024-12-12T20:36:57.171Z] Copying: 193/1024 [MB] (17 MBps) [2024-12-12T20:36:58.544Z] Copying: 218/1024 [MB] (24 MBps) [2024-12-12T20:36:59.477Z] Copying: 238/1024 [MB] (20 MBps) [2024-12-12T20:37:00.468Z] Copying: 265/1024 [MB] (26 MBps) [2024-12-12T20:37:01.447Z] Copying: 285/1024 [MB] (20 MBps) [2024-12-12T20:37:02.379Z] Copying: 303/1024 [MB] (17 MBps) [2024-12-12T20:37:03.313Z] Copying: 317/1024 [MB] (14 MBps) [2024-12-12T20:37:04.245Z] Copying: 370/1024 [MB] (53 MBps) [2024-12-12T20:37:05.176Z] Copying: 403/1024 [MB] (33 MBps) [2024-12-12T20:37:06.547Z] Copying: 430/1024 [MB] (26 MBps) [2024-12-12T20:37:07.481Z] Copying: 455/1024 [MB] (24 MBps) [2024-12-12T20:37:08.424Z] Copying: 475/1024 [MB] (19 MBps) [2024-12-12T20:37:09.356Z] Copying: 489/1024 [MB] (14 MBps) [2024-12-12T20:37:10.289Z] Copying: 503/1024 [MB] (13 MBps) [2024-12-12T20:37:11.222Z] Copying: 536/1024 [MB] (33 MBps) [2024-12-12T20:37:12.154Z] Copying: 562/1024 [MB] (26 MBps) [2024-12-12T20:37:13.526Z] Copying: 589/1024 [MB] (26 MBps) [2024-12-12T20:37:14.458Z] Copying: 618/1024 [MB] (29 MBps) [2024-12-12T20:37:15.431Z] Copying: 645/1024 [MB] (26 MBps) [2024-12-12T20:37:16.365Z] Copying: 680/1024 [MB] (35 MBps) [2024-12-12T20:37:17.298Z] 
Copying: 699/1024 [MB] (18 MBps) [2024-12-12T20:37:18.232Z] Copying: 716/1024 [MB] (16 MBps) [2024-12-12T20:37:19.165Z] Copying: 730/1024 [MB] (14 MBps) [2024-12-12T20:37:20.548Z] Copying: 749/1024 [MB] (18 MBps) [2024-12-12T20:37:21.481Z] Copying: 772/1024 [MB] (22 MBps) [2024-12-12T20:37:22.445Z] Copying: 795/1024 [MB] (23 MBps) [2024-12-12T20:37:23.426Z] Copying: 818/1024 [MB] (23 MBps) [2024-12-12T20:37:24.376Z] Copying: 840/1024 [MB] (21 MBps) [2024-12-12T20:37:25.313Z] Copying: 872/1024 [MB] (32 MBps) [2024-12-12T20:37:26.250Z] Copying: 919/1024 [MB] (46 MBps) [2024-12-12T20:37:27.183Z] Copying: 939/1024 [MB] (20 MBps) [2024-12-12T20:37:28.556Z] Copying: 958/1024 [MB] (19 MBps) [2024-12-12T20:37:29.493Z] Copying: 980/1024 [MB] (22 MBps) [2024-12-12T20:37:30.431Z] Copying: 1004/1024 [MB] (23 MBps) [2024-12-12T20:37:31.362Z] Copying: 1019/1024 [MB] (15 MBps) [2024-12-12T20:37:31.680Z] Copying: 1048340/1048576 [kB] (3892 kBps) [2024-12-12T20:37:31.680Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-12 20:37:31.370609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.452 [2024-12-12 20:37:31.370661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:47.452 [2024-12-12 20:37:31.370676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:47.452 [2024-12-12 20:37:31.370684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.452 [2024-12-12 20:37:31.370705] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:47.452 [2024-12-12 20:37:31.373303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.452 [2024-12-12 20:37:31.373334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:47.452 [2024-12-12 20:37:31.373348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:26:47.452 [2024-12-12 20:37:31.373356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.452 [2024-12-12 20:37:31.383036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.452 [2024-12-12 20:37:31.383073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:47.452 [2024-12-12 20:37:31.383082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.885 ms 00:26:47.452 [2024-12-12 20:37:31.383089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.452 [2024-12-12 20:37:31.408078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.452 [2024-12-12 20:37:31.408112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:47.452 [2024-12-12 20:37:31.408122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.973 ms 00:26:47.452 [2024-12-12 20:37:31.408130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.452 [2024-12-12 20:37:31.414216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.452 [2024-12-12 20:37:31.414244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:47.452 [2024-12-12 20:37:31.414254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.058 ms 00:26:47.452 [2024-12-12 20:37:31.414263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.452 [2024-12-12 20:37:31.438393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.452 [2024-12-12 20:37:31.438439] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:47.452 [2024-12-12 20:37:31.438449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.099 ms 00:26:47.452 [2024-12-12 20:37:31.438456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.452 [2024-12-12 20:37:31.452631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.452 [2024-12-12 20:37:31.452668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:47.452 [2024-12-12 20:37:31.452681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.143 ms 00:26:47.452 [2024-12-12 20:37:31.452690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.711 [2024-12-12 20:37:31.741661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.711 [2024-12-12 20:37:31.741733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:47.711 [2024-12-12 20:37:31.741747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 288.932 ms 00:26:47.711 [2024-12-12 20:37:31.741756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.711 [2024-12-12 20:37:31.766384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.711 [2024-12-12 20:37:31.766441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:47.711 [2024-12-12 20:37:31.766452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.610 ms 00:26:47.711 [2024-12-12 20:37:31.766468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.711 [2024-12-12 20:37:31.789751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.711 [2024-12-12 20:37:31.789784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:47.711 [2024-12-12 20:37:31.789796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.249 ms 00:26:47.711 [2024-12-12 20:37:31.789803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.711 [2024-12-12 20:37:31.812600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.711 [2024-12-12 20:37:31.812635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:47.711 [2024-12-12 20:37:31.812645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.765 ms 00:26:47.711 [2024-12-12 20:37:31.812653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.711 [2024-12-12 20:37:31.835528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.711 [2024-12-12 20:37:31.835566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:47.711 [2024-12-12 20:37:31.835577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.820 ms 00:26:47.711 [2024-12-12 20:37:31.835584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.711 [2024-12-12 20:37:31.835618] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:47.711 [2024-12-12 20:37:31.835633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107520 / 261120 wr_cnt: 1 state: open 00:26:47.711 [2024-12-12 20:37:31.835643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 
state: free 00:26:47.711 [2024-12-12 20:37:31.835659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:47.711 [2024-12-12 20:37:31.835770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 
261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.835996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836226] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:47.712 [2024-12-12 20:37:31.836402] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:47.712 [2024-12-12 20:37:31.836429] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fcd2b5d7-4ee6-453c-9330-4be43066ede6 00:26:47.712 [2024-12-12 20:37:31.836444] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107520 00:26:47.712 [2024-12-12 20:37:31.836451] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108480 00:26:47.712 [2024-12-12 20:37:31.836459] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107520 00:26:47.712 [2024-12-12 20:37:31.836467] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:26:47.712 [2024-12-12 20:37:31.836473] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:47.712 [2024-12-12 20:37:31.836481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:47.712 [2024-12-12 20:37:31.836488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:47.712 [2024-12-12 20:37:31.836495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:47.713 [2024-12-12 20:37:31.836501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:47.713 [2024-12-12 20:37:31.836507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.713 [2024-12-12 20:37:31.836515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:47.713 [2024-12-12 20:37:31.836523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:26:47.713 [2024-12-12 20:37:31.836529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.713 [2024-12-12 20:37:31.849096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.713 [2024-12-12 20:37:31.849128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:47.713 [2024-12-12 20:37:31.849138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.532 ms 00:26:47.713 [2024-12-12 20:37:31.849146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.713 [2024-12-12 20:37:31.849511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.713 [2024-12-12 20:37:31.849526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:47.713 [2024-12-12 20:37:31.849539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:26:47.713 [2024-12-12 20:37:31.849547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.713 [2024-12-12 20:37:31.882084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.713 [2024-12-12 20:37:31.882120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:47.713 [2024-12-12 20:37:31.882130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.713 [2024-12-12 20:37:31.882138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.713 [2024-12-12 20:37:31.882193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.713 [2024-12-12 20:37:31.882201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:47.713 [2024-12-12 20:37:31.882213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.713 [2024-12-12 20:37:31.882220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.713 [2024-12-12 20:37:31.882271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.713 [2024-12-12 20:37:31.882281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:47.713 [2024-12-12 20:37:31.882289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:26:47.713 [2024-12-12 20:37:31.882295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.713 [2024-12-12 20:37:31.882310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.713 [2024-12-12 20:37:31.882317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:47.713 [2024-12-12 20:37:31.882324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.713 [2024-12-12 20:37:31.882334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.970 [2024-12-12 20:37:31.959503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.970 [2024-12-12 20:37:31.959549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:47.970 [2024-12-12 20:37:31.959560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.970 [2024-12-12 20:37:31.959567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.970 [2024-12-12 20:37:32.023249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.971 [2024-12-12 20:37:32.023295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:47.971 [2024-12-12 20:37:32.023310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.971 [2024-12-12 20:37:32.023317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.971 [2024-12-12 20:37:32.023387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.971 [2024-12-12 20:37:32.023397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:47.971 [2024-12-12 20:37:32.023405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.971 [2024-12-12 20:37:32.023428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.971 [2024-12-12 20:37:32.023461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.971 [2024-12-12 20:37:32.023470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:47.971 [2024-12-12 20:37:32.023477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.971 [2024-12-12 20:37:32.023484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.971 [2024-12-12 20:37:32.023575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.971 [2024-12-12 20:37:32.023585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:47.971 [2024-12-12 20:37:32.023592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.971 [2024-12-12 20:37:32.023600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.971 [2024-12-12 20:37:32.023630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.971 [2024-12-12 20:37:32.023639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:47.971 [2024-12-12 20:37:32.023646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.971 [2024-12-12 20:37:32.023654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.971 [2024-12-12 20:37:32.023691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.971 [2024-12-12 20:37:32.023700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:47.971 
[2024-12-12 20:37:32.023707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.971 [2024-12-12 20:37:32.023714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.971 [2024-12-12 20:37:32.023753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.971 [2024-12-12 20:37:32.023762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:47.971 [2024-12-12 20:37:32.023770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.971 [2024-12-12 20:37:32.023777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.971 [2024-12-12 20:37:32.023886] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 653.260 ms, result 0 00:26:49.343 00:26:49.343 00:26:49.343 20:37:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:51.870 20:37:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:51.870 [2024-12-12 20:37:35.592115] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:26:51.870 [2024-12-12 20:37:35.592237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82690 ] 00:26:51.870 [2024-12-12 20:37:35.752946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.870 [2024-12-12 20:37:35.848441] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.128 [2024-12-12 20:37:36.110650] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:52.128 [2024-12-12 20:37:36.110717] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:52.128 [2024-12-12 20:37:36.267921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.128 [2024-12-12 20:37:36.267972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:52.128 [2024-12-12 20:37:36.267984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:52.128 [2024-12-12 20:37:36.267992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.128 [2024-12-12 20:37:36.268036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.128 [2024-12-12 20:37:36.268047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:52.129 [2024-12-12 20:37:36.268055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:52.129 [2024-12-12 20:37:36.268062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.129 [2024-12-12 20:37:36.268080] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:52.129 [2024-12-12 20:37:36.268766] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:52.129 [2024-12-12 20:37:36.268787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.129 [2024-12-12 20:37:36.268795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Open cache bdev 00:26:52.129 [2024-12-12 20:37:36.268803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:26:52.129 [2024-12-12 20:37:36.268811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.129 [2024-12-12 20:37:36.269856] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:52.129 [2024-12-12 20:37:36.282505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.129 [2024-12-12 20:37:36.282539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:52.129 [2024-12-12 20:37:36.282551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.650 ms 00:26:52.129 [2024-12-12 20:37:36.282559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.129 [2024-12-12 20:37:36.282613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.129 [2024-12-12 20:37:36.282623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:52.129 [2024-12-12 20:37:36.282631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:52.129 [2024-12-12 20:37:36.282638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.129 [2024-12-12 20:37:36.287473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.129 [2024-12-12 20:37:36.287503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:52.129 [2024-12-12 20:37:36.287513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.785 ms 00:26:52.129 [2024-12-12 20:37:36.287524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.129 [2024-12-12 20:37:36.287589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.129 [2024-12-12 20:37:36.287598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:52.129 [2024-12-12 20:37:36.287606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:52.129 [2024-12-12 20:37:36.287613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.129 [2024-12-12 20:37:36.287661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.129 [2024-12-12 20:37:36.287671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:52.129 [2024-12-12 20:37:36.287679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:52.129 [2024-12-12 20:37:36.287686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.129 [2024-12-12 20:37:36.287709] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:52.129 [2024-12-12 20:37:36.290865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.129 [2024-12-12 20:37:36.290893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:52.129 [2024-12-12 20:37:36.290904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.160 ms 00:26:52.129 [2024-12-12 20:37:36.290911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.129 [2024-12-12 20:37:36.290940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.129 [2024-12-12 20:37:36.290948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:52.129 [2024-12-12 20:37:36.290956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.010 ms 00:26:52.129 [2024-12-12 20:37:36.290963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.129 [2024-12-12 20:37:36.290981] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:52.129 [2024-12-12 20:37:36.291000] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:52.129 [2024-12-12 20:37:36.291034] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:52.129 [2024-12-12 20:37:36.291051] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:52.129 [2024-12-12 20:37:36.291151] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:52.129 [2024-12-12 20:37:36.291161] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:52.129 [2024-12-12 20:37:36.291172] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:52.129 [2024-12-12 20:37:36.291181] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:52.129 [2024-12-12 20:37:36.291189] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:52.129 [2024-12-12 20:37:36.291196] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:52.129 [2024-12-12 20:37:36.291204] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:52.129 [2024-12-12 20:37:36.291211] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:52.129 [2024-12-12 20:37:36.291220] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:52.129 [2024-12-12 20:37:36.291228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.129 [2024-12-12 20:37:36.291235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:52.129 [2024-12-12 20:37:36.291243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:26:52.129 [2024-12-12 20:37:36.291250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.129 [2024-12-12 20:37:36.291331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.129 [2024-12-12 20:37:36.291339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:52.129 [2024-12-12 20:37:36.291346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:52.129 [2024-12-12 20:37:36.291352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.129 [2024-12-12 20:37:36.291463] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:52.129 [2024-12-12 20:37:36.291473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:52.129 [2024-12-12 20:37:36.291481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:52.129 [2024-12-12 20:37:36.291489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.129 [2024-12-12 20:37:36.291497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:52.129 [2024-12-12 20:37:36.291503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:52.129 [2024-12-12 20:37:36.291510] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:52.129 [2024-12-12 20:37:36.291517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:52.129 [2024-12-12 20:37:36.291524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:52.129 [2024-12-12 20:37:36.291530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:52.129 [2024-12-12 20:37:36.291537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:52.129 [2024-12-12 20:37:36.291545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:52.129 [2024-12-12 20:37:36.291552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:52.129 [2024-12-12 20:37:36.291566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:52.129 [2024-12-12 20:37:36.291572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:52.129 [2024-12-12 20:37:36.291579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.129 [2024-12-12 20:37:36.291586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:52.129 [2024-12-12 20:37:36.291593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:52.129 [2024-12-12 20:37:36.291599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.129 [2024-12-12 20:37:36.291606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:52.129 [2024-12-12 20:37:36.291612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:52.129 [2024-12-12 20:37:36.291619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.129 [2024-12-12 20:37:36.291625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:52.129 [2024-12-12 20:37:36.291632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:52.129 [2024-12-12 20:37:36.291638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.129 [2024-12-12 20:37:36.291645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:52.129 [2024-12-12 20:37:36.291651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:52.129 [2024-12-12 20:37:36.291657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.129 [2024-12-12 20:37:36.291664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:52.129 [2024-12-12 20:37:36.291670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:52.129 [2024-12-12 20:37:36.291676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.129 [2024-12-12 20:37:36.291682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:52.129 [2024-12-12 20:37:36.291688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:52.129 [2024-12-12 20:37:36.291695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:52.129 [2024-12-12 20:37:36.291701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:52.129 [2024-12-12 20:37:36.291708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:52.129 [2024-12-12 20:37:36.291714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:52.129 [2024-12-12 20:37:36.291720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:52.129 [2024-12-12 
20:37:36.291727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:52.129 [2024-12-12 20:37:36.291733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.129 [2024-12-12 20:37:36.291740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:52.129 [2024-12-12 20:37:36.291746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:52.129 [2024-12-12 20:37:36.291752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.129 [2024-12-12 20:37:36.291759] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:52.129 [2024-12-12 20:37:36.291768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:52.129 [2024-12-12 20:37:36.291776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:52.129 [2024-12-12 20:37:36.291783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.129 [2024-12-12 20:37:36.291790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:52.129 [2024-12-12 20:37:36.291797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:52.130 [2024-12-12 20:37:36.291804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:52.130 [2024-12-12 20:37:36.291810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:52.130 [2024-12-12 20:37:36.291816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:52.130 [2024-12-12 20:37:36.291823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:52.130 [2024-12-12 20:37:36.291831] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:52.130 [2024-12-12 20:37:36.291839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:52.130 [2024-12-12 20:37:36.291850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:52.130 [2024-12-12 20:37:36.291858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:52.130 [2024-12-12 20:37:36.291865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:52.130 [2024-12-12 20:37:36.291871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:52.130 [2024-12-12 20:37:36.291878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:52.130 [2024-12-12 20:37:36.291885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:52.130 [2024-12-12 20:37:36.291892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:52.130 [2024-12-12 20:37:36.291899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:52.130 [2024-12-12 20:37:36.291905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 
blk_sz:0x40 00:26:52.130 [2024-12-12 20:37:36.291912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:52.130 [2024-12-12 20:37:36.291920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:52.130 [2024-12-12 20:37:36.291926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:52.130 [2024-12-12 20:37:36.291933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:52.130 [2024-12-12 20:37:36.291940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:52.130 [2024-12-12 20:37:36.291947] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:52.130 [2024-12-12 20:37:36.291956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:52.130 [2024-12-12 20:37:36.291964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:52.130 [2024-12-12 20:37:36.291971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:52.130 [2024-12-12 20:37:36.291977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:52.130 [2024-12-12 20:37:36.291985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:52.130 [2024-12-12 20:37:36.291992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.130 [2024-12-12 20:37:36.291999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:52.130 [2024-12-12 20:37:36.292007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:26:52.130 [2024-12-12 20:37:36.292014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.130 [2024-12-12 20:37:36.317458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.130 [2024-12-12 20:37:36.317490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:52.130 [2024-12-12 20:37:36.317499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.393 ms 00:26:52.130 [2024-12-12 20:37:36.317509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.130 [2024-12-12 20:37:36.317591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.130 [2024-12-12 20:37:36.317599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:52.130 [2024-12-12 20:37:36.317607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:52.130 [2024-12-12 20:37:36.317613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.388 [2024-12-12 20:37:36.357466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.388 [2024-12-12 20:37:36.357506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:52.388 [2024-12-12 20:37:36.357518] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.805 ms 00:26:52.388 [2024-12-12 20:37:36.357526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.388 [2024-12-12 20:37:36.357566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.388 [2024-12-12 20:37:36.357575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:52.388 [2024-12-12 20:37:36.357586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:52.388 [2024-12-12 20:37:36.357594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.388 [2024-12-12 20:37:36.357944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.357984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:52.389 [2024-12-12 20:37:36.357993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:26:52.389 [2024-12-12 20:37:36.358000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.389 [2024-12-12 20:37:36.358124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.358133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:52.389 [2024-12-12 20:37:36.358141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:26:52.389 [2024-12-12 20:37:36.358151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.389 [2024-12-12 20:37:36.371102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.371135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:52.389 [2024-12-12 20:37:36.371147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.932 ms 00:26:52.389 [2024-12-12 20:37:36.371154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.389 [2024-12-12 20:37:36.383708] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:52.389 [2024-12-12 20:37:36.383743] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:52.389 [2024-12-12 20:37:36.383754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.383762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:52.389 [2024-12-12 20:37:36.383770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.498 ms 00:26:52.389 [2024-12-12 20:37:36.383777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.389 [2024-12-12 20:37:36.408033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.408069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:52.389 [2024-12-12 20:37:36.408078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.220 ms 00:26:52.389 [2024-12-12 20:37:36.408086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.389 [2024-12-12 20:37:36.419513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.419542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:52.389 [2024-12-12 20:37:36.419551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
11.393 ms 00:26:52.389 [2024-12-12 20:37:36.419558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.389 [2024-12-12 20:37:36.430808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.430841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:52.389 [2024-12-12 20:37:36.430851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.219 ms 00:26:52.389 [2024-12-12 20:37:36.430858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.389 [2024-12-12 20:37:36.431461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.431484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:52.389 [2024-12-12 20:37:36.431495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:26:52.389 [2024-12-12 20:37:36.431503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.389 [2024-12-12 20:37:36.487287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.487339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:52.389 [2024-12-12 20:37:36.487357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.766 ms 00:26:52.389 [2024-12-12 20:37:36.487365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.389 [2024-12-12 20:37:36.497669] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:52.389 [2024-12-12 20:37:36.499956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.499986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:52.389 [2024-12-12 20:37:36.499996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.547 ms 00:26:52.389 [2024-12-12 20:37:36.500004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.389 [2024-12-12 20:37:36.500086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.500096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:52.389 [2024-12-12 20:37:36.500105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:52.389 [2024-12-12 20:37:36.500115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.389 [2024-12-12 20:37:36.501393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.501439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:52.389 [2024-12-12 20:37:36.501450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:26:52.389 [2024-12-12 20:37:36.501458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.389 [2024-12-12 20:37:36.501483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.501492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:52.389 [2024-12-12 20:37:36.501501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:52.389 [2024-12-12 20:37:36.501509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.389 [2024-12-12 20:37:36.501545] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 
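
(Aside on the layout numbers dumped above: the l2p region size follows directly from the superblock parameters: 20,971,520 L2P entries × 4 B per entry = 83,886,080 B = 80.00 MiB, which is exactly the "Region l2p ... blocks: 80.00 MiB" line in the NV cache layout. The "9 (of 10) MiB" in the ftl_l2p_cache message is a different, much smaller figure; it appears to be the in-memory cache budget for that table, of which at most 9 MiB stays resident at once.)
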
00:26:52.389 [2024-12-12 20:37:36.501556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.501564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:52.389 [2024-12-12 20:37:36.501573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:52.389 [2024-12-12 20:37:36.501581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:52.389 [2024-12-12 20:37:36.524656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.524694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:52.389 [2024-12-12 20:37:36.524709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.058 ms 00:26:52.389 [2024-12-12 20:37:36.524716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:52.389 [2024-12-12 20:37:36.524780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.389 [2024-12-12 20:37:36.524789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:52.389 [2024-12-12 20:37:36.524797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:52.389 [2024-12-12 20:37:36.524804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:52.389 [2024-12-12 20:37:36.525752] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 257.413 ms, result 0
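
Each management step above is traced by trace_step as an Action / name / duration / status quadruple, so per-step timings can be pulled straight out of the console output. A minimal sketch, assuming the log has been saved one record per line to a file (ftl_startup.log is a hypothetical name):

    # rank FTL management steps by duration, slowest first
    awk '/trace_step/ && /name: /     { sub(/.*name: /, "");     step = $0 }
         /trace_step/ && /duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                        printf "%9.3f ms  %s\n", $0, step }' \
        ftl_startup.log | sort -rn | head

On this startup it would rank Restore P2L checkpoints (55.766 ms) and Initialize NV cache (39.805 ms) at the top of the 257.413 ms total. The read-back that follows copies 262144 blocks from ftl0 into testfile (1024 MiB, assuming the 4 KiB block size implied by the progress counter's 1024 [MB] total); testfile is later checked against the md5 digest the test recorded earlier, giving the "testfile: OK" further down.
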
00:26:53.763  [2024-12-12T20:37:38.925Z] Copying: 988/1048576 [kB] (988 kBps)
[... 47 intermediate spdk_dd progress updates elided; throughput ramped from ~1 MBps up to ~37 MBps ...]
[2024-12-12T20:38:26.259Z] Copying: 1024/1024 [MB] (average 20 MBps)
[2024-12-12 20:38:25.983202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.032 [2024-12-12 20:38:25.983268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:42.032 [2024-12-12 20:38:25.983285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:42.032 [2024-12-12 20:38:25.983295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.032 [2024-12-12 20:38:25.983321] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:42.032 [2024-12-12 20:38:25.986818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.032 [2024-12-12 20:38:25.986851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:42.032 [2024-12-12 20:38:25.986861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.478 ms 00:27:42.032 [2024-12-12 20:38:25.986869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.032 [2024-12-12 20:38:25.987087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.032 [2024-12-12 20:38:25.987102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:42.032 [2024-12-12 20:38:25.987111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:27:42.032 [2024-12-12 20:38:25.987118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.032 [2024-12-12 20:38:25.999588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.032 [2024-12-12 20:38:25.999626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:42.032 [2024-12-12 20:38:25.999637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.454 ms 00:27:42.032 [2024-12-12 20:38:25.999645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.032 [2024-12-12 20:38:26.006569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.032 [2024-12-12 20:38:26.006606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:42.032 [2024-12-12 20:38:26.006621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.894 ms 00:27:42.032 [2024-12-12 20:38:26.006631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.032 [2024-12-12 20:38:26.030771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.032 [2024-12-12 20:38:26.030805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:42.032 [2024-12-12 20:38:26.030816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.093 ms 00:27:42.032 [2024-12-12 20:38:26.030822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.032 [2024-12-12 20:38:26.044664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.032 [2024-12-12 20:38:26.044697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:42.032 [2024-12-12 20:38:26.044707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.810 ms 00:27:42.032 [2024-12-12 20:38:26.044714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.032 [2024-12-12 20:38:26.048299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.032 [2024-12-12 20:38:26.048332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:42.032 [2024-12-12 20:38:26.048341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.550 ms 00:27:42.032 [2024-12-12 20:38:26.048353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.032 [2024-12-12 20:38:26.072126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.032 [2024-12-12 20:38:26.072159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:42.032 [2024-12-12 20:38:26.072168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.759 ms 00:27:42.032 [2024-12-12 20:38:26.072175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.032 [2024-12-12 20:38:26.095303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.032 [2024-12-12 20:38:26.095335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:42.032 [2024-12-12 20:38:26.095345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.098 ms 00:27:42.032 [2024-12-12 20:38:26.095352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.032 [2024-12-12 20:38:26.118276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.032 [2024-12-12 20:38:26.118307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:42.032 [2024-12-12 20:38:26.118317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.894 ms 00:27:42.032 [2024-12-12 20:38:26.118324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.032 [2024-12-12 20:38:26.140890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.032 [2024-12-12 20:38:26.140921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:42.032 [2024-12-12 20:38:26.140931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.515 ms 00:27:42.032 [2024-12-12 20:38:26.140937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
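
The bands-validity dump that follows reports, for each of the device's 100 bands, valid blocks / band capacity along with a write count and state; only the first two bands hold any data. Summing the per-band valid counts reproduces the total that the statistics records print right after the dump. A minimal sketch, again assuming one record per line in a hypothetical bands.log:

    # total the valid-block counts from an ftl_dev_dump_bands dump
    awk '{ for (i = 1; i < NF; i++) if ($i == "Band") sum += $(i + 2) }
         END { print sum " valid LBAs" }' bands.log

Here that works out to 261120 (Band 1, closed) + 1536 (Band 2, open) + 0 across the 98 free bands = 262656, exactly the "total valid LBAs: 262656" reported below.
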
00:27:42.032 [2024-12-12 20:38:26.140967] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:42.032 [2024-12-12 20:38:26.140981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:27:42.032 [2024-12-12 20:38:26.140990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:27:42.032 [2024-12-12 20:38:26.140998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:27:42.032 [2024-12-12 20:38:26.141006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
[... ftl_dev_dump_bands records for Bands 5-100 elided; every one reports 0 / 261120 wr_cnt: 0 state: free ...]
00:27:42.033 [2024-12-12 20:38:26.141728] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:42.033 [2024-12-12 20:38:26.141735] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fcd2b5d7-4ee6-453c-9330-4be43066ede6
00:27:42.033 [2024-12-12 20:38:26.141743] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
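
A related cross-check: the write-amplification factor (WAF) in the statistics that follow is total writes divided by user writes. Verifying both shutdowns' reported figures from the numbers in this log (only the 4-decimal rounding is assumed):

    awk 'BEGIN {
        printf "WAF (first shutdown): %.4f\n", 108480 / 107520
        printf "WAF (this shutdown):  %.4f\n", 157120 / 155136
    }'

This prints 1.0089 and 1.0128, matching the two logged "WAF:" values; the extra ~1% over the user writes is presumably the FTL's own metadata traffic (the superblock, P2L, band-info and trim persists traced above).
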
00:27:42.033 [2024-12-12 20:38:26.141750] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 157120 00:27:42.033 [2024-12-12 20:38:26.141761] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 155136 00:27:42.033 [2024-12-12 20:38:26.141769] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0128 00:27:42.033 [2024-12-12 20:38:26.141776] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:42.033 [2024-12-12 20:38:26.141789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:42.033 [2024-12-12 20:38:26.141797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:42.033 [2024-12-12 20:38:26.141804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:42.033 [2024-12-12 20:38:26.141824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:42.033 [2024-12-12 20:38:26.141831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.033 [2024-12-12 20:38:26.141838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:42.033 [2024-12-12 20:38:26.141845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:27:42.033 [2024-12-12 20:38:26.141852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.033 [2024-12-12 20:38:26.154247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.033 [2024-12-12 20:38:26.154277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:42.033 [2024-12-12 20:38:26.154286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.379 ms 00:27:42.033 [2024-12-12 20:38:26.154293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.033 [2024-12-12 20:38:26.154651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.033 [2024-12-12 20:38:26.154665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:42.033 [2024-12-12 20:38:26.154673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:27:42.033 [2024-12-12 20:38:26.154681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.033 [2024-12-12 20:38:26.187283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.033 [2024-12-12 20:38:26.187316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:42.033 [2024-12-12 20:38:26.187325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.033 [2024-12-12 20:38:26.187333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.033 [2024-12-12 20:38:26.187378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.033 [2024-12-12 20:38:26.187385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:42.033 [2024-12-12 20:38:26.187393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.033 [2024-12-12 20:38:26.187400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:42.033 [2024-12-12 20:38:26.187475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.033 [2024-12-12 20:38:26.187485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:42.033 [2024-12-12 20:38:26.187493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.033 [2024-12-12
20:38:26.187500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.033 [2024-12-12 20:38:26.187514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.033 [2024-12-12 20:38:26.187523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:42.033 [2024-12-12 20:38:26.187530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.033 [2024-12-12 20:38:26.187537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.292 [2024-12-12 20:38:26.265024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.292 [2024-12-12 20:38:26.265078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:42.292 [2024-12-12 20:38:26.265089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.292 [2024-12-12 20:38:26.265096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.292 [2024-12-12 20:38:26.328272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.292 [2024-12-12 20:38:26.328315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:42.292 [2024-12-12 20:38:26.328327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.292 [2024-12-12 20:38:26.328335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.292 [2024-12-12 20:38:26.328381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.292 [2024-12-12 20:38:26.328395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:42.292 [2024-12-12 20:38:26.328402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.292 [2024-12-12 20:38:26.328410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.292 [2024-12-12 20:38:26.328479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.292 [2024-12-12 20:38:26.328489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:42.292 [2024-12-12 20:38:26.328496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.292 [2024-12-12 20:38:26.328504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.292 [2024-12-12 20:38:26.328588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.292 [2024-12-12 20:38:26.328598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:42.292 [2024-12-12 20:38:26.328608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.292 [2024-12-12 20:38:26.328615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.292 [2024-12-12 20:38:26.328643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.292 [2024-12-12 20:38:26.328651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:42.292 [2024-12-12 20:38:26.328659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.292 [2024-12-12 20:38:26.328666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.292 [2024-12-12 20:38:26.328699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.292 [2024-12-12 20:38:26.328707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:42.292 [2024-12-12 20:38:26.328717] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.292 [2024-12-12 20:38:26.328724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.292 [2024-12-12 20:38:26.328761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.292 [2024-12-12 20:38:26.328771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:42.292 [2024-12-12 20:38:26.328778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.292 [2024-12-12 20:38:26.328785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.292 [2024-12-12 20:38:26.328893] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.677 ms, result 0 00:27:42.859 00:27:42.859 00:27:42.859 20:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:45.388 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:45.388 20:38:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:45.388 [2024-12-12 20:38:29.225086] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:27:45.388 [2024-12-12 20:38:29.225200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83242 ] 00:27:45.388 [2024-12-12 20:38:29.385173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.388 [2024-12-12 20:38:29.481261] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.694 [2024-12-12 20:38:29.739759] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:45.694 [2024-12-12 20:38:29.739820] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:45.694 [2024-12-12 20:38:29.897455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.694 [2024-12-12 20:38:29.897500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:45.694 [2024-12-12 20:38:29.897513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:45.694 [2024-12-12 20:38:29.897520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.694 [2024-12-12 20:38:29.897566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.694 [2024-12-12 20:38:29.897579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:45.694 [2024-12-12 20:38:29.897587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:45.694 [2024-12-12 20:38:29.897594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.694 [2024-12-12 20:38:29.897610] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:45.694 [2024-12-12 20:38:29.898279] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:45.694 [2024-12-12 20:38:29.898295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.694 [2024-12-12 20:38:29.898303] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:45.694 [2024-12-12 20:38:29.898310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:27:45.694 [2024-12-12 20:38:29.898318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.694 [2024-12-12 20:38:29.899375] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:45.694 [2024-12-12 20:38:29.912142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.694 [2024-12-12 20:38:29.912172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:45.694 [2024-12-12 20:38:29.912184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.768 ms 00:27:45.694 [2024-12-12 20:38:29.912191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.694 [2024-12-12 20:38:29.912247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.694 [2024-12-12 20:38:29.912256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:45.694 [2024-12-12 20:38:29.912264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:45.694 [2024-12-12 20:38:29.912271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.694 [2024-12-12 20:38:29.917380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.694 [2024-12-12 20:38:29.917407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:45.694 [2024-12-12 20:38:29.917426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.061 ms 00:27:45.694 [2024-12-12 20:38:29.917437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.694 [2024-12-12 20:38:29.917502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.694 [2024-12-12 20:38:29.917510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:45.695 [2024-12-12 20:38:29.917518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:45.695 [2024-12-12 20:38:29.917525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.695 [2024-12-12 20:38:29.917573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.695 [2024-12-12 20:38:29.917583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:45.695 [2024-12-12 20:38:29.917590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:45.695 [2024-12-12 20:38:29.917598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.695 [2024-12-12 20:38:29.917621] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:45.954 [2024-12-12 20:38:29.920964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.954 [2024-12-12 20:38:29.920984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:45.954 [2024-12-12 20:38:29.920996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.347 ms 00:27:45.954 [2024-12-12 20:38:29.921003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.954 [2024-12-12 20:38:29.921032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.954 [2024-12-12 20:38:29.921040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:45.954 [2024-12-12 20:38:29.921048] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:45.954 [2024-12-12 20:38:29.921055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.954 [2024-12-12 20:38:29.921074] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:45.954 [2024-12-12 20:38:29.921092] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:45.954 [2024-12-12 20:38:29.921125] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:45.954 [2024-12-12 20:38:29.921141] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:45.954 [2024-12-12 20:38:29.921243] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:45.954 [2024-12-12 20:38:29.921252] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:45.954 [2024-12-12 20:38:29.921262] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:45.954 [2024-12-12 20:38:29.921272] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:45.954 [2024-12-12 20:38:29.921280] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:45.954 [2024-12-12 20:38:29.921288] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:45.954 [2024-12-12 20:38:29.921295] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:45.954 [2024-12-12 20:38:29.921302] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:45.954 [2024-12-12 20:38:29.921312] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:45.954 [2024-12-12 20:38:29.921319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.954 [2024-12-12 20:38:29.921326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:45.954 [2024-12-12 20:38:29.921334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:27:45.954 [2024-12-12 20:38:29.921340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.954 [2024-12-12 20:38:29.921433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.954 [2024-12-12 20:38:29.921442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:45.954 [2024-12-12 20:38:29.921449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:27:45.954 [2024-12-12 20:38:29.921455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.954 [2024-12-12 20:38:29.921564] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:45.954 [2024-12-12 20:38:29.921574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:45.954 [2024-12-12 20:38:29.921582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:45.954 [2024-12-12 20:38:29.921589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.954 [2024-12-12 20:38:29.921596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:45.954 [2024-12-12 20:38:29.921603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:45.954 
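The layout parameters above make the region sizes in the dump that follows easy to verify: 20971520 L2P entries at an address size of 4 bytes come to exactly 80 MiB, which is the size reported for the l2p region just below. A minimal re-check of that arithmetic, using only numbers taken from this log (the snippet itself is illustrative and not part of the test suite):

    # Sanity-check the l2p region size implied by the superblock parameters.
    entries=20971520   # "L2P entries" from ftl_layout_setup above
    addr_size=4        # "L2P address size" in bytes
    echo "l2p region: $(( entries * addr_size / 1024 / 1024 )) MiB"   # prints: l2p region: 80 MiB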
[2024-12-12 20:38:29.921610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:45.954 [2024-12-12 20:38:29.921617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:45.954 [2024-12-12 20:38:29.921624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:45.954 [2024-12-12 20:38:29.921630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:45.954 [2024-12-12 20:38:29.921637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:45.954 [2024-12-12 20:38:29.921644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:45.954 [2024-12-12 20:38:29.921650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:45.954 [2024-12-12 20:38:29.921662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:45.954 [2024-12-12 20:38:29.921668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:45.954 [2024-12-12 20:38:29.921675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.954 [2024-12-12 20:38:29.921683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:45.954 [2024-12-12 20:38:29.921689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:45.954 [2024-12-12 20:38:29.921696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.954 [2024-12-12 20:38:29.921702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:45.954 [2024-12-12 20:38:29.921709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:45.954 [2024-12-12 20:38:29.921715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.954 [2024-12-12 20:38:29.921722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:45.954 [2024-12-12 20:38:29.921728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:45.954 [2024-12-12 20:38:29.921734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.954 [2024-12-12 20:38:29.921740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:45.954 [2024-12-12 20:38:29.921747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:45.954 [2024-12-12 20:38:29.921753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.954 [2024-12-12 20:38:29.921760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:45.954 [2024-12-12 20:38:29.921766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:45.954 [2024-12-12 20:38:29.921772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.954 [2024-12-12 20:38:29.921779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:45.954 [2024-12-12 20:38:29.921785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:45.954 [2024-12-12 20:38:29.921792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:45.954 [2024-12-12 20:38:29.921807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:45.954 [2024-12-12 20:38:29.921814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:45.954 [2024-12-12 20:38:29.921820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:45.954 [2024-12-12 20:38:29.921827] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_log 00:27:45.954 [2024-12-12 20:38:29.921833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:45.954 [2024-12-12 20:38:29.921840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.954 [2024-12-12 20:38:29.921846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:45.954 [2024-12-12 20:38:29.921852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:45.954 [2024-12-12 20:38:29.921859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.954 [2024-12-12 20:38:29.921866] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:45.954 [2024-12-12 20:38:29.921873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:45.954 [2024-12-12 20:38:29.921880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:45.954 [2024-12-12 20:38:29.921887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.954 [2024-12-12 20:38:29.921894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:45.954 [2024-12-12 20:38:29.921902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:45.954 [2024-12-12 20:38:29.921909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:45.954 [2024-12-12 20:38:29.921916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:45.955 [2024-12-12 20:38:29.921922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:45.955 [2024-12-12 20:38:29.921929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:45.955 [2024-12-12 20:38:29.921936] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:45.955 [2024-12-12 20:38:29.921945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:45.955 [2024-12-12 20:38:29.921956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:45.955 [2024-12-12 20:38:29.921963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:45.955 [2024-12-12 20:38:29.921969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:45.955 [2024-12-12 20:38:29.921976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:45.955 [2024-12-12 20:38:29.921983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:45.955 [2024-12-12 20:38:29.921990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:45.955 [2024-12-12 20:38:29.921996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:45.955 [2024-12-12 20:38:29.922003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:45.955 [2024-12-12 20:38:29.922010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:45.955 [2024-12-12 20:38:29.922017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:45.955 [2024-12-12 20:38:29.922023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:45.955 [2024-12-12 20:38:29.922030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:45.955 [2024-12-12 20:38:29.922036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:45.955 [2024-12-12 20:38:29.922043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:45.955 [2024-12-12 20:38:29.922050] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:45.955 [2024-12-12 20:38:29.922058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:45.955 [2024-12-12 20:38:29.922066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:45.955 [2024-12-12 20:38:29.922073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:45.955 [2024-12-12 20:38:29.922080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:45.955 [2024-12-12 20:38:29.922088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:45.955 [2024-12-12 20:38:29.922095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:29.922102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:45.955 [2024-12-12 20:38:29.922109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:27:45.955 [2024-12-12 20:38:29.922115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:29.947764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:29.947793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:45.955 [2024-12-12 20:38:29.947803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.609 ms 00:27:45.955 [2024-12-12 20:38:29.947813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:29.947892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:29.947901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:45.955 [2024-12-12 20:38:29.947908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:45.955 [2024-12-12 20:38:29.947915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:29.987599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:29.987632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:27:45.955 [2024-12-12 20:38:29.987644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.635 ms 00:27:45.955 [2024-12-12 20:38:29.987652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:29.987690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:29.987699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:45.955 [2024-12-12 20:38:29.987711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:45.955 [2024-12-12 20:38:29.987718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:29.988076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:29.988099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:45.955 [2024-12-12 20:38:29.988108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:27:45.955 [2024-12-12 20:38:29.988115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:29.988233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:29.988246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:45.955 [2024-12-12 20:38:29.988255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:27:45.955 [2024-12-12 20:38:29.988264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:30.001326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:30.001353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:45.955 [2024-12-12 20:38:30.001366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.044 ms 00:27:45.955 [2024-12-12 20:38:30.001374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:30.014071] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:45.955 [2024-12-12 20:38:30.014104] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:45.955 [2024-12-12 20:38:30.014115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:30.014123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:45.955 [2024-12-12 20:38:30.014132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.626 ms 00:27:45.955 [2024-12-12 20:38:30.014140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:30.037888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:30.037919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:45.955 [2024-12-12 20:38:30.037932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.711 ms 00:27:45.955 [2024-12-12 20:38:30.037941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:30.049531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:30.049578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:45.955 [2024-12-12 20:38:30.049588] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.542 ms 00:27:45.955 [2024-12-12 20:38:30.049595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:30.061214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:30.061244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:45.955 [2024-12-12 20:38:30.061254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.587 ms 00:27:45.955 [2024-12-12 20:38:30.061262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:30.061899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:30.061919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:45.955 [2024-12-12 20:38:30.061930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:27:45.955 [2024-12-12 20:38:30.061938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:30.116895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:30.116941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:45.955 [2024-12-12 20:38:30.116957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.939 ms 00:27:45.955 [2024-12-12 20:38:30.116966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:30.127222] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:45.955 [2024-12-12 20:38:30.129558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:30.129581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:45.955 [2024-12-12 20:38:30.129592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.550 ms 00:27:45.955 [2024-12-12 20:38:30.129600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:30.129687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:30.129697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:45.955 [2024-12-12 20:38:30.129706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:45.955 [2024-12-12 20:38:30.129716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:30.130285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:30.130309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:45.955 [2024-12-12 20:38:30.130318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:27:45.955 [2024-12-12 20:38:30.130325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:30.130348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:30.130356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:45.955 [2024-12-12 20:38:30.130364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:45.955 [2024-12-12 20:38:30.130371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:30.130404] mngt/ftl_mngt_self_test.c: 
208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:45.955 [2024-12-12 20:38:30.130425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:30.130433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:45.955 [2024-12-12 20:38:30.130441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:45.955 [2024-12-12 20:38:30.130448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.955 [2024-12-12 20:38:30.153749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.955 [2024-12-12 20:38:30.153779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:45.956 [2024-12-12 20:38:30.153793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.284 ms 00:27:45.956 [2024-12-12 20:38:30.153807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.956 [2024-12-12 20:38:30.153871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.956 [2024-12-12 20:38:30.153880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:45.956 [2024-12-12 20:38:30.153887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:45.956 [2024-12-12 20:38:30.153895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.956 [2024-12-12 20:38:30.154895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 257.044 ms, result 0 00:27:47.329  [2024-12-12T20:38:32.491Z] Copying: 21/1024 [MB] (21 MBps) [2024-12-12T20:38:33.426Z] Copying: 32/1024 [MB] (11 MBps) [2024-12-12T20:38:34.360Z] Copying: 44/1024 [MB] (11 MBps) [2024-12-12T20:38:35.732Z] Copying: 55/1024 [MB] (11 MBps) [2024-12-12T20:38:36.666Z] Copying: 67/1024 [MB] (11 MBps) [2024-12-12T20:38:37.598Z] Copying: 78/1024 [MB] (11 MBps) [2024-12-12T20:38:38.531Z] Copying: 89/1024 [MB] (10 MBps) [2024-12-12T20:38:39.465Z] Copying: 101/1024 [MB] (12 MBps) [2024-12-12T20:38:40.401Z] Copying: 114/1024 [MB] (12 MBps) [2024-12-12T20:38:41.334Z] Copying: 127/1024 [MB] (13 MBps) [2024-12-12T20:38:42.721Z] Copying: 146/1024 [MB] (18 MBps) [2024-12-12T20:38:43.656Z] Copying: 156/1024 [MB] (10 MBps) [2024-12-12T20:38:44.591Z] Copying: 171/1024 [MB] (14 MBps) [2024-12-12T20:38:45.526Z] Copying: 181/1024 [MB] (10 MBps) [2024-12-12T20:38:46.460Z] Copying: 193/1024 [MB] (11 MBps) [2024-12-12T20:38:47.396Z] Copying: 205/1024 [MB] (11 MBps) [2024-12-12T20:38:48.330Z] Copying: 216/1024 [MB] (11 MBps) [2024-12-12T20:38:49.705Z] Copying: 232/1024 [MB] (15 MBps) [2024-12-12T20:38:50.658Z] Copying: 250/1024 [MB] (18 MBps) [2024-12-12T20:38:51.616Z] Copying: 275/1024 [MB] (24 MBps) [2024-12-12T20:38:52.556Z] Copying: 294/1024 [MB] (19 MBps) [2024-12-12T20:38:53.490Z] Copying: 314/1024 [MB] (19 MBps) [2024-12-12T20:38:54.424Z] Copying: 331/1024 [MB] (16 MBps) [2024-12-12T20:38:55.358Z] Copying: 345/1024 [MB] (13 MBps) [2024-12-12T20:38:56.732Z] Copying: 358/1024 [MB] (13 MBps) [2024-12-12T20:38:57.666Z] Copying: 372/1024 [MB] (13 MBps) [2024-12-12T20:38:58.600Z] Copying: 386/1024 [MB] (13 MBps) [2024-12-12T20:38:59.534Z] Copying: 404/1024 [MB] (17 MBps) [2024-12-12T20:39:00.465Z] Copying: 419/1024 [MB] (15 MBps) [2024-12-12T20:39:01.398Z] Copying: 440/1024 [MB] (21 MBps) [2024-12-12T20:39:02.332Z] Copying: 458/1024 [MB] (17 MBps) [2024-12-12T20:39:03.705Z] Copying: 470/1024 [MB] (12 MBps) [2024-12-12T20:39:04.638Z] 
Copying: 486/1024 [MB] (15 MBps) [2024-12-12T20:39:05.571Z] Copying: 507/1024 [MB] (20 MBps) [2024-12-12T20:39:06.533Z] Copying: 521/1024 [MB] (14 MBps) [2024-12-12T20:39:07.465Z] Copying: 535/1024 [MB] (13 MBps) [2024-12-12T20:39:08.399Z] Copying: 548/1024 [MB] (13 MBps) [2024-12-12T20:39:09.332Z] Copying: 564/1024 [MB] (15 MBps) [2024-12-12T20:39:10.706Z] Copying: 578/1024 [MB] (14 MBps) [2024-12-12T20:39:11.640Z] Copying: 601/1024 [MB] (22 MBps) [2024-12-12T20:39:12.574Z] Copying: 619/1024 [MB] (18 MBps) [2024-12-12T20:39:13.508Z] Copying: 641/1024 [MB] (21 MBps) [2024-12-12T20:39:14.481Z] Copying: 658/1024 [MB] (17 MBps) [2024-12-12T20:39:15.414Z] Copying: 680/1024 [MB] (21 MBps) [2024-12-12T20:39:16.348Z] Copying: 695/1024 [MB] (14 MBps) [2024-12-12T20:39:17.723Z] Copying: 715/1024 [MB] (20 MBps) [2024-12-12T20:39:18.660Z] Copying: 728/1024 [MB] (13 MBps) [2024-12-12T20:39:19.605Z] Copying: 742/1024 [MB] (13 MBps) [2024-12-12T20:39:20.543Z] Copying: 754/1024 [MB] (11 MBps) [2024-12-12T20:39:21.478Z] Copying: 765/1024 [MB] (11 MBps) [2024-12-12T20:39:22.422Z] Copying: 779/1024 [MB] (13 MBps) [2024-12-12T20:39:23.356Z] Copying: 793/1024 [MB] (14 MBps) [2024-12-12T20:39:24.731Z] Copying: 807/1024 [MB] (14 MBps) [2024-12-12T20:39:25.666Z] Copying: 822/1024 [MB] (14 MBps) [2024-12-12T20:39:26.599Z] Copying: 836/1024 [MB] (13 MBps) [2024-12-12T20:39:27.533Z] Copying: 849/1024 [MB] (13 MBps) [2024-12-12T20:39:28.467Z] Copying: 862/1024 [MB] (13 MBps) [2024-12-12T20:39:29.401Z] Copying: 876/1024 [MB] (13 MBps) [2024-12-12T20:39:30.333Z] Copying: 889/1024 [MB] (13 MBps) [2024-12-12T20:39:31.706Z] Copying: 902/1024 [MB] (12 MBps) [2024-12-12T20:39:32.640Z] Copying: 915/1024 [MB] (12 MBps) [2024-12-12T20:39:33.575Z] Copying: 928/1024 [MB] (13 MBps) [2024-12-12T20:39:34.509Z] Copying: 940/1024 [MB] (12 MBps) [2024-12-12T20:39:35.444Z] Copying: 954/1024 [MB] (13 MBps) [2024-12-12T20:39:36.378Z] Copying: 966/1024 [MB] (12 MBps) [2024-12-12T20:39:37.327Z] Copying: 979/1024 [MB] (12 MBps) [2024-12-12T20:39:38.702Z] Copying: 991/1024 [MB] (12 MBps) [2024-12-12T20:39:39.636Z] Copying: 1004/1024 [MB] (12 MBps) [2024-12-12T20:39:39.895Z] Copying: 1017/1024 [MB] (12 MBps) [2024-12-12T20:39:40.154Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-12 20:39:39.926669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.926 [2024-12-12 20:39:39.926745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:55.926 [2024-12-12 20:39:39.926763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:55.926 [2024-12-12 20:39:39.926774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.926 [2024-12-12 20:39:39.926803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:55.926 [2024-12-12 20:39:39.930893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.926 [2024-12-12 20:39:39.930935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:55.926 [2024-12-12 20:39:39.930948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.071 ms 00:28:55.926 [2024-12-12 20:39:39.930958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.926 [2024-12-12 20:39:39.931290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.926 [2024-12-12 20:39:39.931311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:55.926 [2024-12-12 
20:39:39.931324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:28:55.926 [2024-12-12 20:39:39.931334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.927 [2024-12-12 20:39:39.936294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.927 [2024-12-12 20:39:39.936320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:55.927 [2024-12-12 20:39:39.936333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.942 ms 00:28:55.927 [2024-12-12 20:39:39.936347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.927 [2024-12-12 20:39:39.942982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.927 [2024-12-12 20:39:39.943011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:55.927 [2024-12-12 20:39:39.943021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.615 ms 00:28:55.927 [2024-12-12 20:39:39.943028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.927 [2024-12-12 20:39:39.967501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.927 [2024-12-12 20:39:39.967536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:55.927 [2024-12-12 20:39:39.967546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.426 ms 00:28:55.927 [2024-12-12 20:39:39.967553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.927 [2024-12-12 20:39:39.981928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.927 [2024-12-12 20:39:39.981962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:55.927 [2024-12-12 20:39:39.981973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.343 ms 00:28:55.927 [2024-12-12 20:39:39.981980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.927 [2024-12-12 20:39:39.984488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.927 [2024-12-12 20:39:39.984519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:55.927 [2024-12-12 20:39:39.984529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.470 ms 00:28:55.927 [2024-12-12 20:39:39.984536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.927 [2024-12-12 20:39:40.007838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.927 [2024-12-12 20:39:40.007874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:55.927 [2024-12-12 20:39:40.007885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.287 ms 00:28:55.927 [2024-12-12 20:39:40.007893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.927 [2024-12-12 20:39:40.031593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.927 [2024-12-12 20:39:40.031625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:55.927 [2024-12-12 20:39:40.031635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.669 ms 00:28:55.927 [2024-12-12 20:39:40.031643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.927 [2024-12-12 20:39:40.055114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.927 [2024-12-12 20:39:40.055147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:28:55.927 [2024-12-12 20:39:40.055157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.440 ms 00:28:55.927 [2024-12-12 20:39:40.055164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.927 [2024-12-12 20:39:40.077836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.927 [2024-12-12 20:39:40.077867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:55.927 [2024-12-12 20:39:40.077877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.617 ms 00:28:55.927 [2024-12-12 20:39:40.077884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.927 [2024-12-12 20:39:40.077914] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:55.927 [2024-12-12 20:39:40.077932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:55.927 [2024-12-12 20:39:40.077944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:55.927 [2024-12-12 20:39:40.077952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.077960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.077968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.077975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.077982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.077990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.077997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 
00:28:55.927 [2024-12-12 20:39:40.078077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:55.927 [2024-12-12 20:39:40.078192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 
wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078629] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:55.928 [2024-12-12 20:39:40.078687] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:55.929 [2024-12-12 20:39:40.078694] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fcd2b5d7-4ee6-453c-9330-4be43066ede6 00:28:55.929 [2024-12-12 20:39:40.078702] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:55.929 [2024-12-12 20:39:40.078709] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:55.929 [2024-12-12 20:39:40.078716] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:55.929 [2024-12-12 20:39:40.078724] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:55.929 [2024-12-12 20:39:40.078736] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:55.929 [2024-12-12 20:39:40.078744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:55.929 [2024-12-12 20:39:40.078751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:55.929 [2024-12-12 20:39:40.078758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:55.929 [2024-12-12 20:39:40.078764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:55.929 [2024-12-12 20:39:40.078771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.929 [2024-12-12 20:39:40.078778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:55.929 [2024-12-12 20:39:40.078786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.858 ms 00:28:55.929 [2024-12-12 20:39:40.078794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.929 [2024-12-12 20:39:40.090898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.929 [2024-12-12 20:39:40.090927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:55.929 [2024-12-12 20:39:40.090938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.088 ms 00:28:55.929 [2024-12-12 20:39:40.090945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.929 [2024-12-12 20:39:40.091279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.929 [2024-12-12 20:39:40.091296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:55.929 [2024-12-12 20:39:40.091305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:28:55.929 [2024-12-12 20:39:40.091312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
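The band dump and the statistics just above are internally consistent: the valid blocks in the two written bands (261120 in the closed Band 1 plus 1536 in the open Band 2) sum to the reported total of 262656 valid LBAs, and the WAF line follows directly from the write counters. The first shutdown earlier in this run reported WAF 1.0128, i.e. 157120 total writes over 155136 user writes; this second dump shows 960 total writes with 0 user writes, so the ratio is printed as "inf". A quick re-check of both figures, with the counters copied from this log (illustrative only):

    # Valid-LBA cross-check: per-band valid counts vs. the stats total.
    echo "total valid LBAs: $(( 261120 + 1536 ))"   # expect 262656
    # WAF = total writes / user writes; a zero denominator is what the
    # second dump renders as "inf".
    awk 'BEGIN { printf "WAF (first run): %.4f\n", 157120 / 155136 }'   # prints 1.0128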
00:28:55.929 [2024-12-12 20:39:40.123667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.929 [2024-12-12 20:39:40.123702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:55.929 [2024-12-12 20:39:40.123711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.929 [2024-12-12 20:39:40.123719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.929 [2024-12-12 20:39:40.123775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.929 [2024-12-12 20:39:40.123785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:55.929 [2024-12-12 20:39:40.123793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.929 [2024-12-12 20:39:40.123800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.929 [2024-12-12 20:39:40.123857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.929 [2024-12-12 20:39:40.123867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:55.929 [2024-12-12 20:39:40.123874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.929 [2024-12-12 20:39:40.123881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.929 [2024-12-12 20:39:40.123895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.929 [2024-12-12 20:39:40.123903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:55.929 [2024-12-12 20:39:40.123913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.929 [2024-12-12 20:39:40.123920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.187 [2024-12-12 20:39:40.200126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.187 [2024-12-12 20:39:40.200169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:56.187 [2024-12-12 20:39:40.200181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.187 [2024-12-12 20:39:40.200188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.187 [2024-12-12 20:39:40.261930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.187 [2024-12-12 20:39:40.261976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:56.187 [2024-12-12 20:39:40.261987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.187 [2024-12-12 20:39:40.261994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.187 [2024-12-12 20:39:40.262061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.187 [2024-12-12 20:39:40.262070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:56.188 [2024-12-12 20:39:40.262079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.188 [2024-12-12 20:39:40.262086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.188 [2024-12-12 20:39:40.262117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.188 [2024-12-12 20:39:40.262126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:56.188 [2024-12-12 20:39:40.262133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.188 [2024-12-12 
20:39:40.262142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.188 [2024-12-12 20:39:40.262226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.188 [2024-12-12 20:39:40.262235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:56.188 [2024-12-12 20:39:40.262243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.188 [2024-12-12 20:39:40.262250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.188 [2024-12-12 20:39:40.262277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.188 [2024-12-12 20:39:40.262286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:56.188 [2024-12-12 20:39:40.262293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.188 [2024-12-12 20:39:40.262300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.188 [2024-12-12 20:39:40.262335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.188 [2024-12-12 20:39:40.262343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:56.188 [2024-12-12 20:39:40.262350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.188 [2024-12-12 20:39:40.262357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.188 [2024-12-12 20:39:40.262392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.188 [2024-12-12 20:39:40.262402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:56.188 [2024-12-12 20:39:40.262410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.188 [2024-12-12 20:39:40.262440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.188 [2024-12-12 20:39:40.262546] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.870 ms, result 0 00:28:56.754 00:28:56.754 00:28:56.754 20:39:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:59.285 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:59.285 20:39:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:59.285 20:39:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:59.285 20:39:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:59.285 20:39:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:59.285 20:39:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:59.285 20:39:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:59.285 20:39:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:59.285 Process with pid 81530 is not found 00:28:59.285 20:39:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81530 00:28:59.285 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81530 ']' 00:28:59.285 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81530 00:28:59.285 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81530) - No such process 00:28:59.285 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81530 is not found' 00:28:59.285 20:39:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:59.544 Remove shared memory files 00:28:59.544 20:39:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:59.544 20:39:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:59.544 20:39:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:59.544 20:39:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:59.544 20:39:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:59.544 20:39:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:59.544 20:39:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:59.544 00:28:59.544 real 3m54.274s 00:28:59.544 user 4m12.188s 00:28:59.544 sys 0m22.995s 00:28:59.544 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.544 ************************************ 00:28:59.544 END TEST ftl_dirty_shutdown 00:28:59.544 ************************************ 00:28:59.544 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:59.544 20:39:43 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:59.544 20:39:43 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:59.544 20:39:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.544 20:39:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:59.544 ************************************ 00:28:59.544 START TEST ftl_upgrade_shutdown 00:28:59.544 ************************************ 00:28:59.544 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:59.544 * Looking for test storage... 
00:28:59.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:59.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.803 --rc genhtml_branch_coverage=1 00:28:59.803 --rc genhtml_function_coverage=1 00:28:59.803 --rc genhtml_legend=1 00:28:59.803 --rc geninfo_all_blocks=1 00:28:59.803 --rc geninfo_unexecuted_blocks=1 00:28:59.803 00:28:59.803 ' 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:59.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.803 --rc genhtml_branch_coverage=1 00:28:59.803 --rc genhtml_function_coverage=1 00:28:59.803 --rc genhtml_legend=1 00:28:59.803 --rc geninfo_all_blocks=1 00:28:59.803 --rc geninfo_unexecuted_blocks=1 00:28:59.803 00:28:59.803 ' 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:59.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.803 --rc genhtml_branch_coverage=1 00:28:59.803 --rc genhtml_function_coverage=1 00:28:59.803 --rc genhtml_legend=1 00:28:59.803 --rc geninfo_all_blocks=1 00:28:59.803 --rc geninfo_unexecuted_blocks=1 00:28:59.803 00:28:59.803 ' 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:59.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.803 --rc genhtml_branch_coverage=1 00:28:59.803 --rc genhtml_function_coverage=1 00:28:59.803 --rc genhtml_legend=1 00:28:59.803 --rc geninfo_all_blocks=1 00:28:59.803 --rc geninfo_unexecuted_blocks=1 00:28:59.803 00:28:59.803 ' 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:59.803 20:39:43 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84070 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84070 00:28:59.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84070 ']' 00:28:59.803 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.804 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.804 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.804 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.804 20:39:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:59.804 20:39:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:59.804 [2024-12-12 20:39:43.934353] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
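The waitforlisten step above blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock before any RPCs are issued. A simplified stand-in for that pattern, polling readiness with the spdk_get_version RPC instead of probing the socket directly:

# Start the target pinned to core 0, then wait until its RPC socket responds.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
tgt_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
  sleep 0.5
done
echo "spdk_tgt (pid $tgt_pid) is ready"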
00:28:59.804 [2024-12-12 20:39:43.934480] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84070 ] 00:29:00.062 [2024-12-12 20:39:44.093364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.062 [2024-12-12 20:39:44.191745] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:00.631 20:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:00.889 20:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:00.889 20:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:00.889 20:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:00.889 20:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:00.889 20:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:00.889 20:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:00.889 20:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:00.889 20:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:01.148 20:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:01.148 { 00:29:01.148 "name": "basen1", 00:29:01.148 "aliases": [ 00:29:01.148 "04543abb-dfc3-4db8-a2f0-f48ef2a0c24f" 00:29:01.148 ], 00:29:01.148 "product_name": "NVMe disk", 00:29:01.148 "block_size": 4096, 00:29:01.148 "num_blocks": 1310720, 00:29:01.148 "uuid": "04543abb-dfc3-4db8-a2f0-f48ef2a0c24f", 00:29:01.148 "numa_id": -1, 00:29:01.148 "assigned_rate_limits": { 00:29:01.148 "rw_ios_per_sec": 0, 00:29:01.148 "rw_mbytes_per_sec": 0, 00:29:01.148 "r_mbytes_per_sec": 0, 00:29:01.148 "w_mbytes_per_sec": 0 00:29:01.148 }, 00:29:01.148 "claimed": true, 00:29:01.148 "claim_type": "read_many_write_one", 00:29:01.148 "zoned": false, 00:29:01.148 "supported_io_types": { 00:29:01.148 "read": true, 00:29:01.148 "write": true, 00:29:01.148 "unmap": true, 00:29:01.148 "flush": true, 00:29:01.148 "reset": true, 00:29:01.148 "nvme_admin": true, 00:29:01.148 "nvme_io": true, 00:29:01.148 "nvme_io_md": false, 00:29:01.148 "write_zeroes": true, 00:29:01.148 "zcopy": false, 00:29:01.148 "get_zone_info": false, 00:29:01.148 "zone_management": false, 00:29:01.148 "zone_append": false, 00:29:01.148 "compare": true, 00:29:01.148 "compare_and_write": false, 00:29:01.148 "abort": true, 00:29:01.148 "seek_hole": false, 00:29:01.148 "seek_data": false, 00:29:01.148 "copy": true, 00:29:01.148 "nvme_iov_md": false 00:29:01.148 }, 00:29:01.148 "driver_specific": { 00:29:01.148 "nvme": [ 00:29:01.148 { 00:29:01.148 "pci_address": "0000:00:11.0", 00:29:01.148 "trid": { 00:29:01.148 "trtype": "PCIe", 00:29:01.148 "traddr": "0000:00:11.0" 00:29:01.148 }, 00:29:01.148 "ctrlr_data": { 00:29:01.148 "cntlid": 0, 00:29:01.148 "vendor_id": "0x1b36", 00:29:01.148 "model_number": "QEMU NVMe Ctrl", 00:29:01.148 "serial_number": "12341", 00:29:01.148 "firmware_revision": "8.0.0", 00:29:01.148 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:01.148 "oacs": { 00:29:01.148 "security": 0, 00:29:01.148 "format": 1, 00:29:01.148 "firmware": 0, 00:29:01.148 "ns_manage": 1 00:29:01.148 }, 00:29:01.148 "multi_ctrlr": false, 00:29:01.148 "ana_reporting": false 00:29:01.148 }, 00:29:01.148 "vs": { 00:29:01.148 "nvme_version": "1.4" 00:29:01.148 }, 00:29:01.148 "ns_data": { 00:29:01.148 "id": 1, 00:29:01.148 "can_share": false 00:29:01.148 } 00:29:01.148 } 00:29:01.148 ], 00:29:01.148 "mp_policy": "active_passive" 00:29:01.148 } 00:29:01.148 } 00:29:01.148 ]' 00:29:01.148 20:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:01.148 20:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:01.148 20:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:01.148 20:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:01.148 20:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:01.148 20:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:01.148 20:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:01.148 20:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:01.148 20:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:01.148 20:39:45 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:01.148 20:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:01.406 20:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=929c7234-5cf5-4486-930a-a1fd5746c450 00:29:01.406 20:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:01.406 20:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 929c7234-5cf5-4486-930a-a1fd5746c450 00:29:01.664 20:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:01.923 20:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=5c195810-a264-4a7f-ac05-79f6afc7a4c8 00:29:01.923 20:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 5c195810-a264-4a7f-ac05-79f6afc7a4c8 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=3fb9f779-da0f-49fa-9a71-a18602fcd4ea 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 3fb9f779-da0f-49fa-9a71-a18602fcd4ea ]] 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 3fb9f779-da0f-49fa-9a71-a18602fcd4ea 5120 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=3fb9f779-da0f-49fa-9a71-a18602fcd4ea 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3fb9f779-da0f-49fa-9a71-a18602fcd4ea 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3fb9f779-da0f-49fa-9a71-a18602fcd4ea 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3fb9f779-da0f-49fa-9a71-a18602fcd4ea 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:02.181 { 00:29:02.181 "name": "3fb9f779-da0f-49fa-9a71-a18602fcd4ea", 00:29:02.181 "aliases": [ 00:29:02.181 "lvs/basen1p0" 00:29:02.181 ], 00:29:02.181 "product_name": "Logical Volume", 00:29:02.181 "block_size": 4096, 00:29:02.181 "num_blocks": 5242880, 00:29:02.181 "uuid": "3fb9f779-da0f-49fa-9a71-a18602fcd4ea", 00:29:02.181 "assigned_rate_limits": { 00:29:02.181 "rw_ios_per_sec": 0, 00:29:02.181 "rw_mbytes_per_sec": 0, 00:29:02.181 "r_mbytes_per_sec": 0, 00:29:02.181 "w_mbytes_per_sec": 0 00:29:02.181 }, 00:29:02.181 "claimed": false, 00:29:02.181 "zoned": false, 00:29:02.181 "supported_io_types": { 00:29:02.181 "read": true, 00:29:02.181 "write": true, 00:29:02.181 "unmap": true, 00:29:02.181 "flush": false, 00:29:02.181 "reset": true, 00:29:02.181 "nvme_admin": false, 00:29:02.181 "nvme_io": false, 00:29:02.181 "nvme_io_md": false, 00:29:02.181 "write_zeroes": 
true, 00:29:02.181 "zcopy": false, 00:29:02.181 "get_zone_info": false, 00:29:02.181 "zone_management": false, 00:29:02.181 "zone_append": false, 00:29:02.181 "compare": false, 00:29:02.181 "compare_and_write": false, 00:29:02.181 "abort": false, 00:29:02.181 "seek_hole": true, 00:29:02.181 "seek_data": true, 00:29:02.181 "copy": false, 00:29:02.181 "nvme_iov_md": false 00:29:02.181 }, 00:29:02.181 "driver_specific": { 00:29:02.181 "lvol": { 00:29:02.181 "lvol_store_uuid": "5c195810-a264-4a7f-ac05-79f6afc7a4c8", 00:29:02.181 "base_bdev": "basen1", 00:29:02.181 "thin_provision": true, 00:29:02.181 "num_allocated_clusters": 0, 00:29:02.181 "snapshot": false, 00:29:02.181 "clone": false, 00:29:02.181 "esnap_clone": false 00:29:02.181 } 00:29:02.181 } 00:29:02.181 } 00:29:02.181 ]' 00:29:02.181 20:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:02.440 20:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:02.440 20:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:02.440 20:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:02.440 20:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:02.440 20:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:02.440 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:02.440 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:02.440 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:02.698 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:02.698 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:02.698 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:02.698 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:02.698 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:02.698 20:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 3fb9f779-da0f-49fa-9a71-a18602fcd4ea -c cachen1p0 --l2p_dram_limit 2 00:29:02.957 [2024-12-12 20:39:47.079965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.957 [2024-12-12 20:39:47.080159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:02.957 [2024-12-12 20:39:47.080182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:02.957 [2024-12-12 20:39:47.080191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.957 [2024-12-12 20:39:47.080269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.957 [2024-12-12 20:39:47.080280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:02.957 [2024-12-12 20:39:47.080289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:29:02.957 [2024-12-12 20:39:47.080297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.957 [2024-12-12 20:39:47.080318] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:02.957 [2024-12-12 
20:39:47.081092] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:02.957 [2024-12-12 20:39:47.081113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.957 [2024-12-12 20:39:47.081120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:02.957 [2024-12-12 20:39:47.081132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.797 ms 00:29:02.957 [2024-12-12 20:39:47.081140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.957 [2024-12-12 20:39:47.081171] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID b25e1174-f384-4aa0-a287-d33b4249ab0a 00:29:02.957 [2024-12-12 20:39:47.082266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.957 [2024-12-12 20:39:47.082299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:02.957 [2024-12-12 20:39:47.082310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:02.957 [2024-12-12 20:39:47.082319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.957 [2024-12-12 20:39:47.087521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.957 [2024-12-12 20:39:47.087554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:02.957 [2024-12-12 20:39:47.087563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.132 ms 00:29:02.957 [2024-12-12 20:39:47.087572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.957 [2024-12-12 20:39:47.087609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.957 [2024-12-12 20:39:47.087619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:02.957 [2024-12-12 20:39:47.087628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:02.957 [2024-12-12 20:39:47.087638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.957 [2024-12-12 20:39:47.087690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.957 [2024-12-12 20:39:47.087703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:02.957 [2024-12-12 20:39:47.087711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:02.957 [2024-12-12 20:39:47.087723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.957 [2024-12-12 20:39:47.087744] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:02.957 [2024-12-12 20:39:47.091302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.957 [2024-12-12 20:39:47.091430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:02.957 [2024-12-12 20:39:47.091450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.561 ms 00:29:02.957 [2024-12-12 20:39:47.091458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.957 [2024-12-12 20:39:47.091489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.957 [2024-12-12 20:39:47.091497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:02.957 [2024-12-12 20:39:47.091506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:02.957 [2024-12-12 20:39:47.091514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:02.957 [2024-12-12 20:39:47.091539] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:02.958 [2024-12-12 20:39:47.091678] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:02.958 [2024-12-12 20:39:47.091694] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:02.958 [2024-12-12 20:39:47.091704] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:02.958 [2024-12-12 20:39:47.091715] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:02.958 [2024-12-12 20:39:47.091723] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:02.958 [2024-12-12 20:39:47.091733] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:02.958 [2024-12-12 20:39:47.091740] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:02.958 [2024-12-12 20:39:47.091752] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:02.958 [2024-12-12 20:39:47.091759] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:02.958 [2024-12-12 20:39:47.091768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.958 [2024-12-12 20:39:47.091775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:02.958 [2024-12-12 20:39:47.091785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.231 ms 00:29:02.958 [2024-12-12 20:39:47.091792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.958 [2024-12-12 20:39:47.091876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.958 [2024-12-12 20:39:47.091891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:02.958 [2024-12-12 20:39:47.091899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:29:02.958 [2024-12-12 20:39:47.091906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.958 [2024-12-12 20:39:47.092017] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:02.958 [2024-12-12 20:39:47.092027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:02.958 [2024-12-12 20:39:47.092037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:02.958 [2024-12-12 20:39:47.092048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.958 [2024-12-12 20:39:47.092057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:02.958 [2024-12-12 20:39:47.092064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:02.958 [2024-12-12 20:39:47.092072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:02.958 [2024-12-12 20:39:47.092079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:02.958 [2024-12-12 20:39:47.092087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:02.958 [2024-12-12 20:39:47.092093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.958 [2024-12-12 20:39:47.092103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:02.958 [2024-12-12 20:39:47.092109] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:02.958 [2024-12-12 20:39:47.092118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.958 [2024-12-12 20:39:47.092125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:02.958 [2024-12-12 20:39:47.092133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:02.958 [2024-12-12 20:39:47.092140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.958 [2024-12-12 20:39:47.092150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:02.958 [2024-12-12 20:39:47.092157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:02.958 [2024-12-12 20:39:47.092165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.958 [2024-12-12 20:39:47.092172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:02.958 [2024-12-12 20:39:47.092180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:02.958 [2024-12-12 20:39:47.092186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:02.958 [2024-12-12 20:39:47.092194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:02.958 [2024-12-12 20:39:47.092200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:02.958 [2024-12-12 20:39:47.092208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:02.958 [2024-12-12 20:39:47.092214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:02.958 [2024-12-12 20:39:47.092222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:02.958 [2024-12-12 20:39:47.092229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:02.958 [2024-12-12 20:39:47.092236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:02.958 [2024-12-12 20:39:47.092243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:02.958 [2024-12-12 20:39:47.092251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:02.958 [2024-12-12 20:39:47.092257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:02.958 [2024-12-12 20:39:47.092267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:02.958 [2024-12-12 20:39:47.092273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.958 [2024-12-12 20:39:47.092281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:02.958 [2024-12-12 20:39:47.092287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:02.958 [2024-12-12 20:39:47.092296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.958 [2024-12-12 20:39:47.092303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:02.958 [2024-12-12 20:39:47.092310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:02.958 [2024-12-12 20:39:47.092316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.958 [2024-12-12 20:39:47.092324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:02.958 [2024-12-12 20:39:47.092330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:02.958 [2024-12-12 20:39:47.092338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.958 [2024-12-12 20:39:47.092344] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:02.958 [2024-12-12 20:39:47.092353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:02.958 [2024-12-12 20:39:47.092360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:02.958 [2024-12-12 20:39:47.092369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.958 [2024-12-12 20:39:47.092376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:02.958 [2024-12-12 20:39:47.092386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:02.958 [2024-12-12 20:39:47.092394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:02.958 [2024-12-12 20:39:47.092403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:02.958 [2024-12-12 20:39:47.092409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:02.958 [2024-12-12 20:39:47.092435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:02.958 [2024-12-12 20:39:47.092444] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:02.958 [2024-12-12 20:39:47.092454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.958 [2024-12-12 20:39:47.092465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:02.958 [2024-12-12 20:39:47.092473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:02.958 [2024-12-12 20:39:47.092480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:02.958 [2024-12-12 20:39:47.092489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:02.958 [2024-12-12 20:39:47.092496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:02.958 [2024-12-12 20:39:47.092504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:02.958 [2024-12-12 20:39:47.092511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:02.958 [2024-12-12 20:39:47.092521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:02.958 [2024-12-12 20:39:47.092528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:02.958 [2024-12-12 20:39:47.092539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:02.958 [2024-12-12 20:39:47.092546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:02.958 [2024-12-12 20:39:47.092554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:02.958 [2024-12-12 20:39:47.092561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:02.958 [2024-12-12 20:39:47.092570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:02.958 [2024-12-12 20:39:47.092577] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:02.958 [2024-12-12 20:39:47.092586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.958 [2024-12-12 20:39:47.092594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:02.958 [2024-12-12 20:39:47.092603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:02.958 [2024-12-12 20:39:47.092610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:02.958 [2024-12-12 20:39:47.092618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:02.958 [2024-12-12 20:39:47.092626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.958 [2024-12-12 20:39:47.092635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:02.958 [2024-12-12 20:39:47.092642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.679 ms 00:29:02.958 [2024-12-12 20:39:47.092650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.958 [2024-12-12 20:39:47.092686] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
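Condensed, the device bring-up traced above is the RPC sequence that ftl/common.sh drove, with the addresses, sizes and UUIDs from this job; a sketch of the equivalent manual steps:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Base device: a thin-provisioned 20 GiB lvol carved from the 0000:00:11.0 namespace.
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
$rpc bdev_lvol_create_lvstore basen1 lvs
$rpc bdev_lvol_create basen1p0 20480 -t -u 5c195810-a264-4a7f-ac05-79f6afc7a4c8
# NV cache: the first 5 GiB split of the 0000:00:10.0 namespace.
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
$rpc bdev_split_create cachen1 -s 5120 1
# Bind base and cache into one FTL bdev; L2P DRAM residency is capped at 2 MiB.
$rpc -t 60 bdev_ftl_create -b ftl -d 3fb9f779-da0f-49fa-9a71-a18602fcd4ea -c cachen1p0 --l2p_dram_limit 2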
00:29:02.958 [2024-12-12 20:39:47.092699] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:06.241 [2024-12-12 20:39:50.104863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.241 [2024-12-12 20:39:50.104927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:06.241 [2024-12-12 20:39:50.104942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3012.167 ms 00:29:06.241 [2024-12-12 20:39:50.104952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.241 [2024-12-12 20:39:50.130052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.241 [2024-12-12 20:39:50.130100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:06.241 [2024-12-12 20:39:50.130113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.899 ms 00:29:06.241 [2024-12-12 20:39:50.130122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.241 [2024-12-12 20:39:50.130195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.241 [2024-12-12 20:39:50.130207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:06.241 [2024-12-12 20:39:50.130217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:06.241 [2024-12-12 20:39:50.130281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.241 [2024-12-12 20:39:50.160559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.241 [2024-12-12 20:39:50.160599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:06.241 [2024-12-12 20:39:50.160610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.243 ms 00:29:06.241 [2024-12-12 20:39:50.160619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.241 [2024-12-12 20:39:50.160649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.241 [2024-12-12 20:39:50.160662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:06.241 [2024-12-12 20:39:50.160671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:06.241 [2024-12-12 20:39:50.160680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.241 [2024-12-12 20:39:50.161017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.241 [2024-12-12 20:39:50.161035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:06.241 [2024-12-12 20:39:50.161049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.292 ms 00:29:06.241 [2024-12-12 20:39:50.161058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.241 [2024-12-12 20:39:50.161094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.241 [2024-12-12 20:39:50.161104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:06.241 [2024-12-12 20:39:50.161113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:06.241 [2024-12-12 20:39:50.161124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.241 [2024-12-12 20:39:50.174939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.241 [2024-12-12 20:39:50.174972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:06.241 [2024-12-12 20:39:50.174982] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.798 ms 00:29:06.241 [2024-12-12 20:39:50.174991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.241 [2024-12-12 20:39:50.197325] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:06.241 [2024-12-12 20:39:50.198263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.241 [2024-12-12 20:39:50.198296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:06.241 [2024-12-12 20:39:50.198311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.202 ms 00:29:06.241 [2024-12-12 20:39:50.198320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.241 [2024-12-12 20:39:50.221658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.241 [2024-12-12 20:39:50.221691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:06.241 [2024-12-12 20:39:50.221706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.300 ms 00:29:06.241 [2024-12-12 20:39:50.221715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.241 [2024-12-12 20:39:50.221797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.241 [2024-12-12 20:39:50.221810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:06.241 [2024-12-12 20:39:50.221822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:29:06.242 [2024-12-12 20:39:50.221830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.242 [2024-12-12 20:39:50.244870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.242 [2024-12-12 20:39:50.244999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:06.242 [2024-12-12 20:39:50.245020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.996 ms 00:29:06.242 [2024-12-12 20:39:50.245028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.242 [2024-12-12 20:39:50.268190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.242 [2024-12-12 20:39:50.268322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:06.242 [2024-12-12 20:39:50.268342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.863 ms 00:29:06.242 [2024-12-12 20:39:50.268351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.242 [2024-12-12 20:39:50.268966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.242 [2024-12-12 20:39:50.268987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:06.242 [2024-12-12 20:39:50.268999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.523 ms 00:29:06.242 [2024-12-12 20:39:50.269008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.242 [2024-12-12 20:39:50.341848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.242 [2024-12-12 20:39:50.341886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:06.242 [2024-12-12 20:39:50.341902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.803 ms 00:29:06.242 [2024-12-12 20:39:50.341910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.242 [2024-12-12 20:39:50.366639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:06.242 [2024-12-12 20:39:50.366794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:06.242 [2024-12-12 20:39:50.366815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.659 ms 00:29:06.242 [2024-12-12 20:39:50.366823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.242 [2024-12-12 20:39:50.391307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.242 [2024-12-12 20:39:50.391350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:06.242 [2024-12-12 20:39:50.391365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.217 ms 00:29:06.242 [2024-12-12 20:39:50.391373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.242 [2024-12-12 20:39:50.415054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.242 [2024-12-12 20:39:50.415085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:06.242 [2024-12-12 20:39:50.415099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.642 ms 00:29:06.242 [2024-12-12 20:39:50.415106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.242 [2024-12-12 20:39:50.415146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.242 [2024-12-12 20:39:50.415155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:06.242 [2024-12-12 20:39:50.415168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:06.242 [2024-12-12 20:39:50.415175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.242 [2024-12-12 20:39:50.415264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.242 [2024-12-12 20:39:50.415276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:06.242 [2024-12-12 20:39:50.415286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:06.242 [2024-12-12 20:39:50.415293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.242 [2024-12-12 20:39:50.416121] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3335.770 ms, result 0 00:29:06.242 { 00:29:06.242 "name": "ftl", 00:29:06.242 "uuid": "b25e1174-f384-4aa0-a287-d33b4249ab0a" 00:29:06.242 } 00:29:06.242 20:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:06.500 [2024-12-12 20:39:50.627553] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.500 20:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:06.759 20:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:07.017 [2024-12-12 20:39:51.035965] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:07.017 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:07.017 [2024-12-12 20:39:51.236334] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:07.274 20:39:51 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:07.589 Fill FTL, iteration 1 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84182 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:07.589 20:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84182 /var/tmp/spdk.tgt.sock 00:29:07.590 20:39:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84182 ']' 00:29:07.590 20:39:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:07.590 20:39:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.590 20:39:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:07.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:07.590 20:39:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.590 20:39:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:07.590 [2024-12-12 20:39:51.660726] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
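
tcp_initiator_setup (ftl/common.sh@151 onward, traced above) has just launched a throwaway SPDK app pinned to core 1 with its own RPC socket; as the trace below shows, it then attaches the exported FTL namespace over NVMe/TCP, snapshots the bdev subsystem config to ini.json, and kills the helper app again — later dd passes load that JSON directly. A condensed sketch, with paths as they appear in this log:

  # common.sh@162-165: helper app on core 1, private RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock
  # common.sh@167: connect; the namespace surfaces as bdev "ftln1"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
    bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  # common.sh@171-173: wrap the bdev config in a subsystems array
  # (presumably redirected to test/ftl/config/ini.json, which @153 checks)
  { echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
      save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  # common.sh@176: the helper is no longer needed
  killprocess "$spdk_ini_pid"
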
00:29:07.590 [2024-12-12 20:39:51.661010] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84182 ] 00:29:07.590 [2024-12-12 20:39:51.816694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.847 [2024-12-12 20:39:51.916591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.414 20:39:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.414 20:39:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:08.414 20:39:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:08.672 ftln1 00:29:08.672 20:39:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:08.672 20:39:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:08.930 20:39:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:08.930 20:39:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84182 00:29:08.930 20:39:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84182 ']' 00:29:08.930 20:39:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84182 00:29:08.930 20:39:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:08.930 20:39:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:08.930 20:39:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84182 00:29:08.930 killing process with pid 84182 00:29:08.930 20:39:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:08.930 20:39:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:08.930 20:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84182' 00:29:08.930 20:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84182 00:29:08.930 20:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84182 00:29:10.304 20:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:10.304 20:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:10.304 [2024-12-12 20:39:54.520076] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
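
With ini.json in place, tcp_dd (ftl/common.sh@198-199, traced above) reduces to re-running tcp_initiator_setup — a no-op once the config file exists (@153-154) — and then driving spdk_dd from that JSON on the same core and RPC socket. Roughly, assuming the wrapper matches the xtrace:

  tcp_dd() {
    tcp_initiator_setup   # returns immediately when ini.json already exists
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
  }
  # Fill iteration 1: 1024 x 1 MiB random blocks at offset 0, queue depth 2
  tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
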
00:29:10.304 [2024-12-12 20:39:54.520183] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84228 ] 00:29:10.561 [2024-12-12 20:39:54.679960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.561 [2024-12-12 20:39:54.779140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.934  [2024-12-12T20:39:57.536Z] Copying: 192/1024 [MB] (192 MBps) [2024-12-12T20:39:58.469Z] Copying: 450/1024 [MB] (258 MBps) [2024-12-12T20:39:59.436Z] Copying: 700/1024 [MB] (250 MBps) [2024-12-12T20:39:59.436Z] Copying: 958/1024 [MB] (258 MBps) [2024-12-12T20:40:00.001Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:29:15.773 00:29:15.773 Calculate MD5 checksum, iteration 1 00:29:15.773 20:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:15.773 20:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:15.773 20:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:15.773 20:39:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:15.773 20:39:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:15.773 20:39:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:15.773 20:39:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:15.773 20:39:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:16.030 [2024-12-12 20:40:00.052320] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
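
The upgrade_shutdown.sh@28-48 xtrace above is the whole measurement loop: each iteration writes 1 GiB of random data into ftln1 (this pass averaged 240 MBps), reads the same 1 GiB back into a scratch file, and records its MD5; seek and skip advance by count (1024 MiB) per pass. A sketch reconstructed from the traced values:

  bs=1048576 count=1024 qd=2 iterations=2     # sh@31-34
  seek=0 skip=0 sums=()                       # sh@29, @30, @35
  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  for ((i = 0; i < iterations; i++)); do      # sh@38
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$((seek + count))                    # sh@41: 1024, then 2048
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$file" --bs=$bs --count=$count --qd=$qd --skip=$skip
    skip=$((skip + count))                    # sh@45
    sums[i]=$(md5sum "$file" | cut -f1 -d' ') # sh@47-48
  done
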
00:29:16.030 [2024-12-12 20:40:00.052503] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84284 ] 00:29:16.030 [2024-12-12 20:40:00.206328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.288 [2024-12-12 20:40:00.282855] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.661  [2024-12-12T20:40:02.454Z] Copying: 621/1024 [MB] (621 MBps) [2024-12-12T20:40:02.712Z] Copying: 1024/1024 [MB] (average 641 MBps) 00:29:18.484 00:29:18.484 20:40:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:18.484 20:40:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:21.017 20:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:21.017 20:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6abb9b47131d540a997d42be796bce7c 00:29:21.017 20:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:21.017 20:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:21.017 20:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:21.017 Fill FTL, iteration 2 00:29:21.017 20:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:21.017 20:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:21.018 20:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:21.018 20:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:21.018 20:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:21.018 20:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:21.018 [2024-12-12 20:40:04.866472] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:29:21.018 [2024-12-12 20:40:04.866576] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84341 ] 00:29:21.018 [2024-12-12 20:40:05.022837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.018 [2024-12-12 20:40:05.104101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.428  [2024-12-12T20:40:07.589Z] Copying: 254/1024 [MB] (254 MBps) [2024-12-12T20:40:08.522Z] Copying: 510/1024 [MB] (256 MBps) [2024-12-12T20:40:09.458Z] Copying: 768/1024 [MB] (258 MBps) [2024-12-12T20:40:10.023Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:29:25.795 00:29:25.795 Calculate MD5 checksum, iteration 2 00:29:25.795 20:40:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:25.795 20:40:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:25.795 20:40:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:25.796 20:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:25.796 20:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:25.796 20:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:25.796 20:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:25.796 20:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:26.052 [2024-12-12 20:40:10.051969] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
00:29:26.053 [2024-12-12 20:40:10.052086] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84399 ] 00:29:26.053 [2024-12-12 20:40:10.207795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.309 [2024-12-12 20:40:10.285656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.682  [2024-12-12T20:40:12.475Z] Copying: 669/1024 [MB] (669 MBps) [2024-12-12T20:40:13.408Z] Copying: 1024/1024 [MB] (average 670 MBps) 00:29:29.180 00:29:29.180 20:40:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:29.180 20:40:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:31.080 20:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:31.080 20:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b9cec5914ef7d70e0fa6987cfc51c6c5 00:29:31.080 20:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:31.080 20:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:31.080 20:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:31.080 [2024-12-12 20:40:15.269845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.080 [2024-12-12 20:40:15.269882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:31.080 [2024-12-12 20:40:15.269894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:31.080 [2024-12-12 20:40:15.269900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.080 [2024-12-12 20:40:15.269918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.080 [2024-12-12 20:40:15.269927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:31.080 [2024-12-12 20:40:15.269934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:31.080 [2024-12-12 20:40:15.269940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.080 [2024-12-12 20:40:15.269955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.080 [2024-12-12 20:40:15.269962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:31.080 [2024-12-12 20:40:15.269968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:31.080 [2024-12-12 20:40:15.269973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.080 [2024-12-12 20:40:15.270020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.169 ms, result 0 00:29:31.080 true 00:29:31.080 20:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:31.338 { 00:29:31.338 "name": "ftl", 00:29:31.338 "properties": [ 00:29:31.338 { 00:29:31.338 "name": "superblock_version", 00:29:31.338 "value": 5, 00:29:31.338 "read-only": true 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "name": "base_device", 00:29:31.338 "bands": [ 00:29:31.338 { 00:29:31.338 "id": 0, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 
00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 1, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 2, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 3, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 4, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 5, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 6, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 7, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 8, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 9, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 10, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 11, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 12, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 13, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 14, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 15, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 16, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 17, 00:29:31.338 "state": "FREE", 00:29:31.338 "validity": 0.0 00:29:31.338 } 00:29:31.338 ], 00:29:31.338 "read-only": true 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "name": "cache_device", 00:29:31.338 "type": "bdev", 00:29:31.338 "chunks": [ 00:29:31.338 { 00:29:31.338 "id": 0, 00:29:31.338 "state": "INACTIVE", 00:29:31.338 "utilization": 0.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 1, 00:29:31.338 "state": "CLOSED", 00:29:31.338 "utilization": 1.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 2, 00:29:31.338 "state": "CLOSED", 00:29:31.338 "utilization": 1.0 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 3, 00:29:31.338 "state": "OPEN", 00:29:31.338 "utilization": 0.001953125 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "id": 4, 00:29:31.338 "state": "OPEN", 00:29:31.338 "utilization": 0.0 00:29:31.338 } 00:29:31.338 ], 00:29:31.338 "read-only": true 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "name": "verbose_mode", 00:29:31.338 "value": true, 00:29:31.338 "unit": "", 00:29:31.338 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:31.338 }, 00:29:31.338 { 00:29:31.338 "name": "prep_upgrade_on_shutdown", 00:29:31.338 "value": false, 00:29:31.338 "unit": "", 00:29:31.338 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:31.338 } 00:29:31.338 ] 00:29:31.338 } 00:29:31.339 20:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:31.597 [2024-12-12 20:40:15.585857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
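
The bdev_ftl_get_properties dump above is what the test inspects before shutting down: with the fill complete, the NV cache reports two CLOSED chunks at utilization 1.0 plus one partially filled OPEN chunk, and prep_upgrade_on_shutdown is still false. The property flips and the occupancy check traced around here (sh@52-70) condense to:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true              # sh@52
  $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true  # sh@56
  # sh@59-64: count cache chunks holding data (this run: used=3)
  used=$($rpc bdev_ftl_get_properties -b ftl | jq '[.properties[]
    | select(.name == "cache_device") | .chunks[]
    | select(.utilization != 0.0)] | length')
  if [[ $used -eq 0 ]]; then  # a sketch of the guard; exact handling not traced
    echo 'NV cache unexpectedly empty' >&2
    exit 1
  fi
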
00:29:31.597 [2024-12-12 20:40:15.585898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:31.597 [2024-12-12 20:40:15.585909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:31.597 [2024-12-12 20:40:15.585915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.597 [2024-12-12 20:40:15.585933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.597 [2024-12-12 20:40:15.585940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:31.597 [2024-12-12 20:40:15.585946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:31.597 [2024-12-12 20:40:15.585951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.597 [2024-12-12 20:40:15.585965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.597 [2024-12-12 20:40:15.585971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:31.597 [2024-12-12 20:40:15.585977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:31.597 [2024-12-12 20:40:15.585982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.597 [2024-12-12 20:40:15.586026] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.166 ms, result 0 00:29:31.597 true 00:29:31.597 20:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:31.597 20:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:31.597 20:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:31.597 20:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:31.597 20:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:31.597 20:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:31.855 [2024-12-12 20:40:16.002190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.855 [2024-12-12 20:40:16.002308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:31.855 [2024-12-12 20:40:16.002352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:31.855 [2024-12-12 20:40:16.002369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.855 [2024-12-12 20:40:16.002402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.855 [2024-12-12 20:40:16.002435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:31.855 [2024-12-12 20:40:16.002450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:31.855 [2024-12-12 20:40:16.002465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.855 [2024-12-12 20:40:16.002488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.855 [2024-12-12 20:40:16.002504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:31.855 [2024-12-12 20:40:16.002520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:31.855 [2024-12-12 20:40:16.002558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:31.855 [2024-12-12 20:40:16.002620] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.417 ms, result 0 00:29:31.855 true 00:29:31.855 20:40:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:32.114 { 00:29:32.114 "name": "ftl", 00:29:32.114 "properties": [ 00:29:32.114 { 00:29:32.114 "name": "superblock_version", 00:29:32.114 "value": 5, 00:29:32.114 "read-only": true 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "name": "base_device", 00:29:32.114 "bands": [ 00:29:32.114 { 00:29:32.114 "id": 0, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 1, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 2, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 3, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 4, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 5, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 6, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 7, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 8, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 9, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 10, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 11, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 12, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 13, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 14, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 15, 00:29:32.114 "state": "FREE", 00:29:32.114 "validity": 0.0 00:29:32.114 }, 00:29:32.114 { 00:29:32.114 "id": 16, 00:29:32.114 "state": "FREE", 00:29:32.115 "validity": 0.0 00:29:32.115 }, 00:29:32.115 { 00:29:32.115 "id": 17, 00:29:32.115 "state": "FREE", 00:29:32.115 "validity": 0.0 00:29:32.115 } 00:29:32.115 ], 00:29:32.115 "read-only": true 00:29:32.115 }, 00:29:32.115 { 00:29:32.115 "name": "cache_device", 00:29:32.115 "type": "bdev", 00:29:32.115 "chunks": [ 00:29:32.115 { 00:29:32.115 "id": 0, 00:29:32.115 "state": "INACTIVE", 00:29:32.115 "utilization": 0.0 00:29:32.115 }, 00:29:32.115 { 00:29:32.115 "id": 1, 00:29:32.115 "state": "CLOSED", 00:29:32.115 "utilization": 1.0 00:29:32.115 }, 00:29:32.115 { 00:29:32.115 "id": 2, 00:29:32.115 "state": "CLOSED", 00:29:32.115 "utilization": 1.0 00:29:32.115 }, 00:29:32.115 { 00:29:32.115 "id": 3, 00:29:32.115 "state": "OPEN", 00:29:32.115 "utilization": 0.001953125 00:29:32.115 }, 00:29:32.115 { 00:29:32.115 "id": 4, 00:29:32.115 "state": "OPEN", 00:29:32.115 "utilization": 0.0 00:29:32.115 } 00:29:32.115 ], 00:29:32.115 "read-only": true 00:29:32.115 }, 00:29:32.115 { 00:29:32.115 "name": "verbose_mode", 
00:29:32.115 "value": true, 00:29:32.115 "unit": "", 00:29:32.115 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:32.115 }, 00:29:32.115 { 00:29:32.115 "name": "prep_upgrade_on_shutdown", 00:29:32.115 "value": true, 00:29:32.115 "unit": "", 00:29:32.115 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:32.115 } 00:29:32.115 ] 00:29:32.115 } 00:29:32.115 20:40:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:32.115 20:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84070 ]] 00:29:32.115 20:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84070 00:29:32.115 20:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84070 ']' 00:29:32.115 20:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84070 00:29:32.115 20:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:32.115 20:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.115 20:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84070 00:29:32.115 killing process with pid 84070 00:29:32.115 20:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:32.115 20:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:32.115 20:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84070' 00:29:32.115 20:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84070 00:29:32.115 20:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84070 00:29:32.680 [2024-12-12 20:40:16.796111] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:32.680 [2024-12-12 20:40:16.805717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.680 [2024-12-12 20:40:16.805750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:32.680 [2024-12-12 20:40:16.805760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:32.680 [2024-12-12 20:40:16.805766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.680 [2024-12-12 20:40:16.805783] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:32.680 [2024-12-12 20:40:16.807915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.680 [2024-12-12 20:40:16.807938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:32.680 [2024-12-12 20:40:16.807946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.120 ms 00:29:32.680 [2024-12-12 20:40:16.807955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.788 [2024-12-12 20:40:24.208929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.788 [2024-12-12 20:40:24.209080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:40.788 [2024-12-12 20:40:24.209101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7400.936 ms 00:29:40.788 [2024-12-12 20:40:24.209107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.788 [2024-12-12 20:40:24.210038] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:40.788 [2024-12-12 20:40:24.210051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:40.788 [2024-12-12 20:40:24.210059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.916 ms 00:29:40.788 [2024-12-12 20:40:24.210065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.788 [2024-12-12 20:40:24.210919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.788 [2024-12-12 20:40:24.210938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:40.788 [2024-12-12 20:40:24.210945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.835 ms 00:29:40.788 [2024-12-12 20:40:24.210955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.788 [2024-12-12 20:40:24.218441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.788 [2024-12-12 20:40:24.218466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:40.788 [2024-12-12 20:40:24.218472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.461 ms 00:29:40.788 [2024-12-12 20:40:24.218478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.788 [2024-12-12 20:40:24.222978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.788 [2024-12-12 20:40:24.223003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:40.788 [2024-12-12 20:40:24.223011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.474 ms 00:29:40.788 [2024-12-12 20:40:24.223018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.788 [2024-12-12 20:40:24.223083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.789 [2024-12-12 20:40:24.223095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:40.789 [2024-12-12 20:40:24.223101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:40.789 [2024-12-12 20:40:24.223107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.230140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.789 [2024-12-12 20:40:24.230165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:40.789 [2024-12-12 20:40:24.230172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.022 ms 00:29:40.789 [2024-12-12 20:40:24.230178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.237151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.789 [2024-12-12 20:40:24.237175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:40.789 [2024-12-12 20:40:24.237181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.950 ms 00:29:40.789 [2024-12-12 20:40:24.237187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.244008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.789 [2024-12-12 20:40:24.244033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:40.789 [2024-12-12 20:40:24.244039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.797 ms 00:29:40.789 [2024-12-12 20:40:24.244044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.250943] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.789 [2024-12-12 20:40:24.251047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:40.789 [2024-12-12 20:40:24.251060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.853 ms 00:29:40.789 [2024-12-12 20:40:24.251065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.251088] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:40.789 [2024-12-12 20:40:24.251104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:40.789 [2024-12-12 20:40:24.251112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:40.789 [2024-12-12 20:40:24.251118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:40.789 [2024-12-12 20:40:24.251125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:40.789 [2024-12-12 20:40:24.251213] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:40.789 [2024-12-12 20:40:24.251218] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b25e1174-f384-4aa0-a287-d33b4249ab0a 00:29:40.789 [2024-12-12 20:40:24.251224] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:40.789 [2024-12-12 20:40:24.251230] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:40.789 [2024-12-12 20:40:24.251235] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:40.789 [2024-12-12 20:40:24.251241] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:40.789 [2024-12-12 20:40:24.251248] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:40.789 [2024-12-12 20:40:24.251253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:40.789 [2024-12-12 20:40:24.251261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:40.789 [2024-12-12 20:40:24.251269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:40.789 [2024-12-12 20:40:24.251274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:40.789 [2024-12-12 20:40:24.251281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.789 [2024-12-12 20:40:24.251287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:40.789 [2024-12-12 20:40:24.251293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:29:40.789 [2024-12-12 20:40:24.251299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.261067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.789 [2024-12-12 20:40:24.261089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:40.789 [2024-12-12 20:40:24.261101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.756 ms 00:29:40.789 [2024-12-12 20:40:24.261107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.261380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.789 [2024-12-12 20:40:24.261387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:40.789 [2024-12-12 20:40:24.261394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.258 ms 00:29:40.789 [2024-12-12 20:40:24.261399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.294721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:40.789 [2024-12-12 20:40:24.294753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:40.789 [2024-12-12 20:40:24.294761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:40.789 [2024-12-12 20:40:24.294767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.294790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:40.789 [2024-12-12 20:40:24.294797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:40.789 [2024-12-12 20:40:24.294803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:40.789 [2024-12-12 20:40:24.294808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.294857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:40.789 [2024-12-12 20:40:24.294864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:40.789 [2024-12-12 20:40:24.294873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:40.789 [2024-12-12 20:40:24.294880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.294892] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:40.789 [2024-12-12 20:40:24.294898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:40.789 [2024-12-12 20:40:24.294904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:40.789 [2024-12-12 20:40:24.294909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.354519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:40.789 [2024-12-12 20:40:24.354550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:40.789 [2024-12-12 20:40:24.354562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:40.789 [2024-12-12 20:40:24.354568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.403430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:40.789 [2024-12-12 20:40:24.403461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:40.789 [2024-12-12 20:40:24.403469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:40.789 [2024-12-12 20:40:24.403476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.403526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:40.789 [2024-12-12 20:40:24.403533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:40.789 [2024-12-12 20:40:24.403540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:40.789 [2024-12-12 20:40:24.403546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.403591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:40.789 [2024-12-12 20:40:24.403599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:40.789 [2024-12-12 20:40:24.403605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:40.789 [2024-12-12 20:40:24.403611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.403676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:40.789 [2024-12-12 20:40:24.403684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:40.789 [2024-12-12 20:40:24.403690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:40.789 [2024-12-12 20:40:24.403696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.403721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:40.789 [2024-12-12 20:40:24.403728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:40.789 [2024-12-12 20:40:24.403734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:40.789 [2024-12-12 20:40:24.403740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 [2024-12-12 20:40:24.403770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:40.789 [2024-12-12 20:40:24.403777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:40.789 [2024-12-12 20:40:24.403783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:40.789 [2024-12-12 20:40:24.403789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.789 
[2024-12-12 20:40:24.403825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:40.789 [2024-12-12 20:40:24.403832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:40.789 [2024-12-12 20:40:24.403838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:40.790 [2024-12-12 20:40:24.403843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.790 [2024-12-12 20:40:24.403935] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7598.176 ms, result 0 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84576 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84576 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84576 ']' 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.346 20:40:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:47.346 [2024-12-12 20:40:30.633581] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
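
The shutdown dump above is self-consistent: total valid LBAs 524288 = 261120 + 261120 + 2048 across the three closed bands, and WAF 1.5006 = 786752 total writes / 524288 user writes. tcp_target_setup (sh@75, common.sh@81-91) then restarts the target from tgt.json — presumably the file save_config wrote earlier (common.sh@126) — so the FTL bdev, NVMe-oF subsystem and listener come back without any RPCs; the superblock load and layout dump below are FTL reattaching to its on-disk state. The restart boils down to:

  # common.sh@85-91: relaunch on core 0 from the saved JSON config
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"   # default RPC socket /var/tmp/spdk.sock
  # Beyond this excerpt the test presumably re-reads both 1 GiB regions and
  # compares their MD5 sums against ${sums[0]} and ${sums[1]}.
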
00:29:47.346 [2024-12-12 20:40:30.633877] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84576 ] 00:29:47.346 [2024-12-12 20:40:30.793740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.346 [2024-12-12 20:40:30.891814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.606 [2024-12-12 20:40:31.620051] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:47.606 [2024-12-12 20:40:31.620293] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:47.606 [2024-12-12 20:40:31.772548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.606 [2024-12-12 20:40:31.772591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:47.606 [2024-12-12 20:40:31.772604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:47.606 [2024-12-12 20:40:31.772612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.606 [2024-12-12 20:40:31.772665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.606 [2024-12-12 20:40:31.772676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:47.606 [2024-12-12 20:40:31.772684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:47.606 [2024-12-12 20:40:31.772691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.606 [2024-12-12 20:40:31.772714] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:47.606 [2024-12-12 20:40:31.773468] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:47.606 [2024-12-12 20:40:31.773485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.606 [2024-12-12 20:40:31.773514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:47.606 [2024-12-12 20:40:31.773522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.780 ms 00:29:47.606 [2024-12-12 20:40:31.773530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.606 [2024-12-12 20:40:31.774638] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:47.606 [2024-12-12 20:40:31.787186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.606 [2024-12-12 20:40:31.787221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:47.606 [2024-12-12 20:40:31.787236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.550 ms 00:29:47.606 [2024-12-12 20:40:31.787244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.606 [2024-12-12 20:40:31.787299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.606 [2024-12-12 20:40:31.787308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:47.606 [2024-12-12 20:40:31.787316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:47.606 [2024-12-12 20:40:31.787323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.606 [2024-12-12 20:40:31.792192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.606 [2024-12-12 
20:40:31.792222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:47.606 [2024-12-12 20:40:31.792231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.808 ms 00:29:47.606 [2024-12-12 20:40:31.792239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.606 [2024-12-12 20:40:31.792295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.606 [2024-12-12 20:40:31.792305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:47.606 [2024-12-12 20:40:31.792316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:29:47.606 [2024-12-12 20:40:31.792324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.606 [2024-12-12 20:40:31.792362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.606 [2024-12-12 20:40:31.792374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:47.606 [2024-12-12 20:40:31.792382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:47.606 [2024-12-12 20:40:31.792389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.606 [2024-12-12 20:40:31.792409] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:47.606 [2024-12-12 20:40:31.795806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.606 [2024-12-12 20:40:31.795834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:47.606 [2024-12-12 20:40:31.795843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.402 ms 00:29:47.606 [2024-12-12 20:40:31.795853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.606 [2024-12-12 20:40:31.795881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.606 [2024-12-12 20:40:31.795890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:47.606 [2024-12-12 20:40:31.795897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:47.606 [2024-12-12 20:40:31.795904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.606 [2024-12-12 20:40:31.795924] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:47.606 [2024-12-12 20:40:31.795993] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:47.606 [2024-12-12 20:40:31.796028] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:47.606 [2024-12-12 20:40:31.796043] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:47.606 [2024-12-12 20:40:31.796148] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:47.606 [2024-12-12 20:40:31.796158] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:47.606 [2024-12-12 20:40:31.796169] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:47.606 [2024-12-12 20:40:31.796178] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:47.606 [2024-12-12 20:40:31.796187] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:47.606 [2024-12-12 20:40:31.796197] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:47.606 [2024-12-12 20:40:31.796204] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:47.606 [2024-12-12 20:40:31.796211] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:47.606 [2024-12-12 20:40:31.796218] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:47.606 [2024-12-12 20:40:31.796225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.606 [2024-12-12 20:40:31.796233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:47.606 [2024-12-12 20:40:31.796240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.304 ms 00:29:47.606 [2024-12-12 20:40:31.796247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.606 [2024-12-12 20:40:31.796331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.606 [2024-12-12 20:40:31.796339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:47.606 [2024-12-12 20:40:31.796348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:29:47.606 [2024-12-12 20:40:31.796355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.606 [2024-12-12 20:40:31.796616] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:47.606 [2024-12-12 20:40:31.796652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:47.606 [2024-12-12 20:40:31.796672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:47.606 [2024-12-12 20:40:31.796739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.606 [2024-12-12 20:40:31.796762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:47.606 [2024-12-12 20:40:31.796780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:47.606 [2024-12-12 20:40:31.796829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:47.606 [2024-12-12 20:40:31.796850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:47.606 [2024-12-12 20:40:31.796868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:47.606 [2024-12-12 20:40:31.796886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.606 [2024-12-12 20:40:31.796938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:47.606 [2024-12-12 20:40:31.796960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:47.606 [2024-12-12 20:40:31.796978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.606 [2024-12-12 20:40:31.796996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:47.606 [2024-12-12 20:40:31.797013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:47.606 [2024-12-12 20:40:31.797060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.606 [2024-12-12 20:40:31.797082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:47.606 [2024-12-12 20:40:31.797644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:47.606 [2024-12-12 20:40:31.797709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.606 [2024-12-12 20:40:31.797738] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:47.606 [2024-12-12 20:40:31.797761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:47.606 [2024-12-12 20:40:31.797783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:47.606 [2024-12-12 20:40:31.797804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:47.607 [2024-12-12 20:40:31.797848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:47.607 [2024-12-12 20:40:31.797869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:47.607 [2024-12-12 20:40:31.797890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:47.607 [2024-12-12 20:40:31.797911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:47.607 [2024-12-12 20:40:31.797931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:47.607 [2024-12-12 20:40:31.797952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:47.607 [2024-12-12 20:40:31.797973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:47.607 [2024-12-12 20:40:31.797993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:47.607 [2024-12-12 20:40:31.798015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:47.607 [2024-12-12 20:40:31.798035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:47.607 [2024-12-12 20:40:31.798055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.607 [2024-12-12 20:40:31.798076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:47.607 [2024-12-12 20:40:31.798097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:47.607 [2024-12-12 20:40:31.798117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.607 [2024-12-12 20:40:31.798138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:47.607 [2024-12-12 20:40:31.798158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:47.607 [2024-12-12 20:40:31.798179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.607 [2024-12-12 20:40:31.798200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:47.607 [2024-12-12 20:40:31.798220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:47.607 [2024-12-12 20:40:31.798240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.607 [2024-12-12 20:40:31.798261] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:47.607 [2024-12-12 20:40:31.798285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:47.607 [2024-12-12 20:40:31.798308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:47.607 [2024-12-12 20:40:31.798329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:47.607 [2024-12-12 20:40:31.798359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:47.607 [2024-12-12 20:40:31.798381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:47.607 [2024-12-12 20:40:31.798401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:47.607 [2024-12-12 20:40:31.798447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:47.607 [2024-12-12 20:40:31.798469] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:47.607 [2024-12-12 20:40:31.798490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:47.607 [2024-12-12 20:40:31.798517] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:47.607 [2024-12-12 20:40:31.798547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:47.607 [2024-12-12 20:40:31.798574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:47.607 [2024-12-12 20:40:31.798597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:47.607 [2024-12-12 20:40:31.798620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:47.607 [2024-12-12 20:40:31.798642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:47.607 [2024-12-12 20:40:31.798665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:47.607 [2024-12-12 20:40:31.798687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:47.607 [2024-12-12 20:40:31.798710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:47.607 [2024-12-12 20:40:31.798734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:47.607 [2024-12-12 20:40:31.798757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:47.607 [2024-12-12 20:40:31.798779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:47.607 [2024-12-12 20:40:31.798802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:47.607 [2024-12-12 20:40:31.798836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:47.607 [2024-12-12 20:40:31.798858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:47.607 [2024-12-12 20:40:31.798882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:47.607 [2024-12-12 20:40:31.798904] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:47.607 [2024-12-12 20:40:31.798930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:47.607 [2024-12-12 20:40:31.798954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:47.607 [2024-12-12 20:40:31.798976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:47.607 [2024-12-12 20:40:31.798999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:47.607 [2024-12-12 20:40:31.799021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:47.607 [2024-12-12 20:40:31.799059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.607 [2024-12-12 20:40:31.799093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:47.607 [2024-12-12 20:40:31.799118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.659 ms 00:29:47.607 [2024-12-12 20:40:31.799140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.607 [2024-12-12 20:40:31.799339] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:47.607 [2024-12-12 20:40:31.799388] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:50.931 [2024-12-12 20:40:34.774722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.931 [2024-12-12 20:40:34.774920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:50.931 [2024-12-12 20:40:34.774987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2975.378 ms 00:29:50.931 [2024-12-12 20:40:34.775011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.931 [2024-12-12 20:40:34.799964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.931 [2024-12-12 20:40:34.800102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:50.931 [2024-12-12 20:40:34.800158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.748 ms 00:29:50.931 [2024-12-12 20:40:34.800181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.931 [2024-12-12 20:40:34.800281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.931 [2024-12-12 20:40:34.800313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:50.931 [2024-12-12 20:40:34.800335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:50.931 [2024-12-12 20:40:34.800395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.931 [2024-12-12 20:40:34.830731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.931 [2024-12-12 20:40:34.830853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:50.931 [2024-12-12 20:40:34.830906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.256 ms 00:29:50.931 [2024-12-12 20:40:34.830933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.931 [2024-12-12 20:40:34.830982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.931 [2024-12-12 20:40:34.831003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:50.931 [2024-12-12 20:40:34.831023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:50.931 [2024-12-12 20:40:34.831042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.931 [2024-12-12 20:40:34.831389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.931 [2024-12-12 20:40:34.831450] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:50.931 [2024-12-12 20:40:34.831584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.290 ms 00:29:50.931 [2024-12-12 20:40:34.831608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.931 [2024-12-12 20:40:34.831669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.931 [2024-12-12 20:40:34.831691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:50.931 [2024-12-12 20:40:34.831711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:50.931 [2024-12-12 20:40:34.831730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.931 [2024-12-12 20:40:34.845717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.931 [2024-12-12 20:40:34.845745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:50.931 [2024-12-12 20:40:34.845755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.906 ms 00:29:50.931 [2024-12-12 20:40:34.845763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.931 [2024-12-12 20:40:34.877546] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:50.931 [2024-12-12 20:40:34.877585] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:50.931 [2024-12-12 20:40:34.877598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.931 [2024-12-12 20:40:34.877607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:50.931 [2024-12-12 20:40:34.877617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.737 ms 00:29:50.931 [2024-12-12 20:40:34.877624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.931 [2024-12-12 20:40:34.891190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.931 [2024-12-12 20:40:34.891319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:50.931 [2024-12-12 20:40:34.891336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.540 ms 00:29:50.931 [2024-12-12 20:40:34.891344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.931 [2024-12-12 20:40:34.902623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.931 [2024-12-12 20:40:34.902653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:50.931 [2024-12-12 20:40:34.902663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.243 ms 00:29:50.931 [2024-12-12 20:40:34.902670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.931 [2024-12-12 20:40:34.914060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.931 [2024-12-12 20:40:34.914172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:50.931 [2024-12-12 20:40:34.914187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.366 ms 00:29:50.931 [2024-12-12 20:40:34.914194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.931 [2024-12-12 20:40:34.914818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.931 [2024-12-12 20:40:34.914837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:50.931 [2024-12-12 
20:40:34.914846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.551 ms 00:29:50.931 [2024-12-12 20:40:34.914854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.931 [2024-12-12 20:40:34.970388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.931 [2024-12-12 20:40:34.970451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:50.931 [2024-12-12 20:40:34.970463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.516 ms 00:29:50.931 [2024-12-12 20:40:34.970472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.931 [2024-12-12 20:40:34.980826] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:50.932 [2024-12-12 20:40:34.981615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.932 [2024-12-12 20:40:34.981642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:50.932 [2024-12-12 20:40:34.981652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.092 ms 00:29:50.932 [2024-12-12 20:40:34.981660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.932 [2024-12-12 20:40:34.981740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.932 [2024-12-12 20:40:34.981753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:50.932 [2024-12-12 20:40:34.981762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:50.932 [2024-12-12 20:40:34.981770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.932 [2024-12-12 20:40:34.981824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.932 [2024-12-12 20:40:34.981834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:50.932 [2024-12-12 20:40:34.981842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:50.932 [2024-12-12 20:40:34.981850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.932 [2024-12-12 20:40:34.981869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.932 [2024-12-12 20:40:34.981877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:50.932 [2024-12-12 20:40:34.981888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:50.932 [2024-12-12 20:40:34.981896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.932 [2024-12-12 20:40:34.981926] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:50.932 [2024-12-12 20:40:34.981936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.932 [2024-12-12 20:40:34.981944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:50.932 [2024-12-12 20:40:34.981952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:50.932 [2024-12-12 20:40:34.981959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.932 [2024-12-12 20:40:35.004849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.932 [2024-12-12 20:40:35.004885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:50.932 [2024-12-12 20:40:35.004896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.871 ms 00:29:50.932 [2024-12-12 20:40:35.004904] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.932 [2024-12-12 20:40:35.004968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.932 [2024-12-12 20:40:35.004977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:50.932 [2024-12-12 20:40:35.004986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:50.932 [2024-12-12 20:40:35.004993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.932 [2024-12-12 20:40:35.005884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3232.931 ms, result 0 00:29:50.932 [2024-12-12 20:40:35.021191] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.932 [2024-12-12 20:40:35.037172] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:50.932 [2024-12-12 20:40:35.045300] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:50.932 20:40:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:50.932 20:40:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:50.932 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:50.932 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:50.932 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:51.191 [2024-12-12 20:40:35.273367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.191 [2024-12-12 20:40:35.273409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:51.191 [2024-12-12 20:40:35.273444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:51.191 [2024-12-12 20:40:35.273452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.191 [2024-12-12 20:40:35.273475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.191 [2024-12-12 20:40:35.273484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:51.191 [2024-12-12 20:40:35.273492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:51.191 [2024-12-12 20:40:35.273499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.191 [2024-12-12 20:40:35.273519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.191 [2024-12-12 20:40:35.273528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:51.191 [2024-12-12 20:40:35.273535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:51.191 [2024-12-12 20:40:35.273543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.191 [2024-12-12 20:40:35.273601] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.224 ms, result 0 00:29:51.191 true 00:29:51.191 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:51.449 { 00:29:51.449 "name": "ftl", 00:29:51.449 "properties": [ 00:29:51.449 { 00:29:51.449 "name": "superblock_version", 00:29:51.449 "value": 5, 00:29:51.449 "read-only": true 00:29:51.449 }, 
00:29:51.449 { 00:29:51.449 "name": "base_device", 00:29:51.449 "bands": [ 00:29:51.449 { 00:29:51.449 "id": 0, 00:29:51.449 "state": "CLOSED", 00:29:51.449 "validity": 1.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 1, 00:29:51.449 "state": "CLOSED", 00:29:51.449 "validity": 1.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 2, 00:29:51.449 "state": "CLOSED", 00:29:51.449 "validity": 0.007843137254901933 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 3, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 4, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 5, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 6, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 7, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 8, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 9, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 10, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 11, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 12, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 13, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 14, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 15, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 16, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 17, 00:29:51.449 "state": "FREE", 00:29:51.449 "validity": 0.0 00:29:51.449 } 00:29:51.449 ], 00:29:51.449 "read-only": true 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "name": "cache_device", 00:29:51.449 "type": "bdev", 00:29:51.449 "chunks": [ 00:29:51.449 { 00:29:51.449 "id": 0, 00:29:51.449 "state": "INACTIVE", 00:29:51.449 "utilization": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 1, 00:29:51.449 "state": "OPEN", 00:29:51.449 "utilization": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 2, 00:29:51.449 "state": "OPEN", 00:29:51.449 "utilization": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 3, 00:29:51.449 "state": "FREE", 00:29:51.449 "utilization": 0.0 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "id": 4, 00:29:51.449 "state": "FREE", 00:29:51.449 "utilization": 0.0 00:29:51.449 } 00:29:51.449 ], 00:29:51.449 "read-only": true 00:29:51.449 }, 00:29:51.449 { 00:29:51.449 "name": "verbose_mode", 00:29:51.449 "value": true, 00:29:51.449 "unit": "", 00:29:51.449 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:51.449 }, 00:29:51.449 { 00:29:51.450 "name": "prep_upgrade_on_shutdown", 00:29:51.450 "value": false, 00:29:51.450 "unit": "", 00:29:51.450 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:51.450 } 00:29:51.450 ] 00:29:51.450 } 00:29:51.450 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:51.450 20:40:35 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:51.450 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:51.707 Validate MD5 checksum, iteration 1 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:51.707 20:40:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:51.965 [2024-12-12 20:40:35.981702] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
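The xtrace above shows the shape of the checksum pass: each iteration copies a 1024 MiB slice out of the exported ftln1 bdev over NVMe/TCP, hashes the file, and checks the sum for that slice, advancing --skip by 1024 blocks per pass. A minimal sketch of that loop as implied by the trace (upgrade_shutdown.sh@96-105); `iterations`, `testfile`, and the `sums` array are illustrative names, not necessarily the script's own:

    # Sketch of the validation loop implied by the xtrace above.
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Pull a 1024 MiB slice out of the exported ftln1 bdev over NVMe/TCP.
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        # Compare against the checksum recorded for this slice earlier in the test.
        [[ $sum == "${sums[i]}" ]] || return 1
        skip=$((skip + 1024))
    done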
00:29:51.965 [2024-12-12 20:40:35.981967] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84658 ] 00:29:51.965 [2024-12-12 20:40:36.141892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.223 [2024-12-12 20:40:36.240036] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.598  [2024-12-12T20:40:38.393Z] Copying: 628/1024 [MB] (628 MBps) [2024-12-12T20:40:39.328Z] Copying: 1024/1024 [MB] (average 647 MBps) 00:29:55.100 00:29:55.100 20:40:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:55.100 20:40:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:57.000 20:40:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:57.000 20:40:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6abb9b47131d540a997d42be796bce7c 00:29:57.000 20:40:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6abb9b47131d540a997d42be796bce7c != \6\a\b\b\9\b\4\7\1\3\1\d\5\4\0\a\9\9\7\d\4\2\b\e\7\9\6\b\c\e\7\c ]] 00:29:57.000 20:40:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:57.000 20:40:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:57.000 20:40:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:57.000 Validate MD5 checksum, iteration 2 00:29:57.000 20:40:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:57.000 20:40:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:57.000 20:40:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:57.000 20:40:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:57.000 20:40:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:57.000 20:40:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:57.000 [2024-12-12 20:40:40.885390] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
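The spdk_dd invocation visible in the trace comes from the tcp_dd wrapper (ftl/common.sh@198-199): it first runs tcp_initiator_setup to make sure the NVMe/TCP initiator config (test/ftl/config/ini.json) exists, then launches spdk_dd pinned to core 1 with that config, passing the I/O options through. Roughly, as reconstructed from the traced lines (`$rootdir`/`$testdir` stand in for the repo and test paths shown above):

    # Sketch of tcp_dd as reconstructed from the trace (ftl/common.sh@198-199).
    tcp_dd() {
        tcp_initiator_setup   # ensures test/ftl/config/ini.json (NVMe/TCP initiator config) exists
        "$rootdir/build/bin/spdk_dd" '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$testdir/config/ini.json" "$@"
        # "$@" carries the I/O options, e.g. --ib=ftln1 --of=<file> --bs=1048576 --count=1024 --qd=2 --skip=<n>
    }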
00:29:57.000 [2024-12-12 20:40:40.885641] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84709 ] 00:29:57.000 [2024-12-12 20:40:41.044066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.000 [2024-12-12 20:40:41.136036] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.900  [2024-12-12T20:40:43.385Z] Copying: 655/1024 [MB] (655 MBps) [2024-12-12T20:40:44.319Z] Copying: 1024/1024 [MB] (average 652 MBps) 00:30:00.091 00:30:00.091 20:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:00.091 20:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b9cec5914ef7d70e0fa6987cfc51c6c5 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b9cec5914ef7d70e0fa6987cfc51c6c5 != \b\9\c\e\c\5\9\1\4\e\f\7\d\7\0\e\0\f\a\6\9\8\7\c\f\c\5\1\c\6\c\5 ]] 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84576 ]] 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84576 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84771 00:30:02.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84771 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84771 ']' 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
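This is the heart of the test: tcp_target_shutdown_dirty SIGKILLs the target (pid 84576 above) so FTL never writes a clean shutdown marker, and tcp_target_setup then relaunches spdk_tgt from the saved tgt.json. The FTL startup that follows therefore has to run dirty recovery — note the "SHM: clean 0, shm_clean 0" line and the "Recover band state" / "Restore P2L checkpoints" / chunk-recovery steps below. A sketch of the two helpers as traced from ftl/common.sh (paths and variable names mirror the trace, not the file itself):

    # Sketch of the dirty shutdown + restart as traced (ftl/common.sh).
    tcp_target_shutdown_dirty() {
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid   # SIGKILL: no graceful FTL shutdown
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        # Relaunch from the config saved before the kill; FTL detects the
        # dirty superblock and recovers bands/P2L checkpoints on startup.
        "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' --config="$testdir/config/tgt.json" &
        spdk_tgt_pid=$!
        waitforlisten "$spdk_tgt_pid"   # wait for /var/tmp/spdk.sock
    }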
00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:02.618 20:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:02.618 [2024-12-12 20:40:46.397502] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:30:02.618 [2024-12-12 20:40:46.397586] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84771 ] 00:30:02.618 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84576 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:02.618 [2024-12-12 20:40:46.551940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.618 [2024-12-12 20:40:46.651626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.184 [2024-12-12 20:40:47.345723] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:03.184 [2024-12-12 20:40:47.345792] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:03.444 [2024-12-12 20:40:47.493921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.444 [2024-12-12 20:40:47.493973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:03.444 [2024-12-12 20:40:47.493986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:03.444 [2024-12-12 20:40:47.493994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.444 [2024-12-12 20:40:47.494047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.444 [2024-12-12 20:40:47.494058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:03.444 [2024-12-12 20:40:47.494066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:03.444 [2024-12-12 20:40:47.494073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.444 [2024-12-12 20:40:47.494097] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:03.444 [2024-12-12 20:40:47.494824] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:03.444 [2024-12-12 20:40:47.494842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.444 [2024-12-12 20:40:47.494850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:03.444 [2024-12-12 20:40:47.494858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.752 ms 00:30:03.444 [2024-12-12 20:40:47.494865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.444 [2024-12-12 20:40:47.495172] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:03.444 [2024-12-12 20:40:47.511522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.444 [2024-12-12 20:40:47.511557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:03.444 [2024-12-12 20:40:47.511568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.352 ms 
00:30:03.444 [2024-12-12 20:40:47.511575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.444 [2024-12-12 20:40:47.520339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.444 [2024-12-12 20:40:47.520372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:03.444 [2024-12-12 20:40:47.520382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:30:03.444 [2024-12-12 20:40:47.520389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.444 [2024-12-12 20:40:47.520708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.444 [2024-12-12 20:40:47.520719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:03.444 [2024-12-12 20:40:47.520728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.229 ms 00:30:03.444 [2024-12-12 20:40:47.520735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.444 [2024-12-12 20:40:47.520782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.444 [2024-12-12 20:40:47.520796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:03.444 [2024-12-12 20:40:47.520804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:30:03.444 [2024-12-12 20:40:47.520812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.444 [2024-12-12 20:40:47.520834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.444 [2024-12-12 20:40:47.520842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:03.444 [2024-12-12 20:40:47.520850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:03.444 [2024-12-12 20:40:47.520857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.444 [2024-12-12 20:40:47.520876] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:03.444 [2024-12-12 20:40:47.523758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.444 [2024-12-12 20:40:47.523783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:03.444 [2024-12-12 20:40:47.523792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.887 ms 00:30:03.444 [2024-12-12 20:40:47.523799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.444 [2024-12-12 20:40:47.523830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.444 [2024-12-12 20:40:47.523838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:03.444 [2024-12-12 20:40:47.523846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:03.444 [2024-12-12 20:40:47.523853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.444 [2024-12-12 20:40:47.523872] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:03.444 [2024-12-12 20:40:47.523890] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:03.444 [2024-12-12 20:40:47.523925] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:03.444 [2024-12-12 20:40:47.523942] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:03.444 [2024-12-12 
20:40:47.524042] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:03.444 [2024-12-12 20:40:47.524052] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:03.444 [2024-12-12 20:40:47.524063] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:03.444 [2024-12-12 20:40:47.524073] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:03.444 [2024-12-12 20:40:47.524082] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:03.444 [2024-12-12 20:40:47.524090] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:03.444 [2024-12-12 20:40:47.524097] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:03.444 [2024-12-12 20:40:47.524104] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:03.444 [2024-12-12 20:40:47.524111] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:03.444 [2024-12-12 20:40:47.524118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.444 [2024-12-12 20:40:47.524127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:03.444 [2024-12-12 20:40:47.524135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.248 ms 00:30:03.444 [2024-12-12 20:40:47.524141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.444 [2024-12-12 20:40:47.524224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.444 [2024-12-12 20:40:47.524232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:03.444 [2024-12-12 20:40:47.524239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:30:03.444 [2024-12-12 20:40:47.524246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.444 [2024-12-12 20:40:47.524346] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:03.444 [2024-12-12 20:40:47.524355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:03.444 [2024-12-12 20:40:47.524366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:03.444 [2024-12-12 20:40:47.524373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:03.444 [2024-12-12 20:40:47.524381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:03.444 [2024-12-12 20:40:47.524387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:03.444 [2024-12-12 20:40:47.524394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:03.444 [2024-12-12 20:40:47.524400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:03.444 [2024-12-12 20:40:47.524408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:03.444 [2024-12-12 20:40:47.524430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:03.444 [2024-12-12 20:40:47.524437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:03.444 [2024-12-12 20:40:47.524443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:03.444 [2024-12-12 20:40:47.524450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:03.444 [2024-12-12 
20:40:47.524459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:03.444 [2024-12-12 20:40:47.524465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:03.444 [2024-12-12 20:40:47.524472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:03.444 [2024-12-12 20:40:47.524478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:03.444 [2024-12-12 20:40:47.524485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:03.444 [2024-12-12 20:40:47.524491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:03.444 [2024-12-12 20:40:47.524498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:03.444 [2024-12-12 20:40:47.524505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:03.444 [2024-12-12 20:40:47.524517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:03.444 [2024-12-12 20:40:47.524524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:03.444 [2024-12-12 20:40:47.524530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:03.444 [2024-12-12 20:40:47.524537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:03.444 [2024-12-12 20:40:47.524543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:03.444 [2024-12-12 20:40:47.524557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:03.444 [2024-12-12 20:40:47.524564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:03.444 [2024-12-12 20:40:47.524570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:03.444 [2024-12-12 20:40:47.524576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:03.444 [2024-12-12 20:40:47.524583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:03.444 [2024-12-12 20:40:47.524589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:03.444 [2024-12-12 20:40:47.524596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:03.444 [2024-12-12 20:40:47.524602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:03.444 [2024-12-12 20:40:47.524608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:03.444 [2024-12-12 20:40:47.524615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:03.444 [2024-12-12 20:40:47.524621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:03.444 [2024-12-12 20:40:47.524628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:03.444 [2024-12-12 20:40:47.524634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:03.445 [2024-12-12 20:40:47.524641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:03.445 [2024-12-12 20:40:47.524647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:03.445 [2024-12-12 20:40:47.524653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:03.445 [2024-12-12 20:40:47.524659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:03.445 [2024-12-12 20:40:47.524665] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:03.445 [2024-12-12 20:40:47.524673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:03.445 
[2024-12-12 20:40:47.524680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:03.445 [2024-12-12 20:40:47.524687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:03.445 [2024-12-12 20:40:47.524694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:03.445 [2024-12-12 20:40:47.524701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:03.445 [2024-12-12 20:40:47.524707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:03.445 [2024-12-12 20:40:47.524714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:03.445 [2024-12-12 20:40:47.524720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:03.445 [2024-12-12 20:40:47.524727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:03.445 [2024-12-12 20:40:47.524735] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:03.445 [2024-12-12 20:40:47.524744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:03.445 [2024-12-12 20:40:47.524752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:03.445 [2024-12-12 20:40:47.524759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:03.445 [2024-12-12 20:40:47.524767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:03.445 [2024-12-12 20:40:47.524773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:03.445 [2024-12-12 20:40:47.524780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:03.445 [2024-12-12 20:40:47.524787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:03.445 [2024-12-12 20:40:47.524794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:03.445 [2024-12-12 20:40:47.524805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:03.445 [2024-12-12 20:40:47.524812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:03.445 [2024-12-12 20:40:47.524818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:03.445 [2024-12-12 20:40:47.524825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:03.445 [2024-12-12 20:40:47.524832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:03.445 [2024-12-12 20:40:47.524838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:03.445 [2024-12-12 20:40:47.524845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:03.445 [2024-12-12 20:40:47.524853] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:03.445 [2024-12-12 20:40:47.524861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:03.445 [2024-12-12 20:40:47.524871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:03.445 [2024-12-12 20:40:47.524878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:03.445 [2024-12-12 20:40:47.524884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:03.445 [2024-12-12 20:40:47.524891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:03.445 [2024-12-12 20:40:47.524898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.445 [2024-12-12 20:40:47.524905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:03.445 [2024-12-12 20:40:47.524913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.621 ms 00:30:03.445 [2024-12-12 20:40:47.524920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.445 [2024-12-12 20:40:47.548532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.445 [2024-12-12 20:40:47.548566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:03.445 [2024-12-12 20:40:47.548577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.553 ms 00:30:03.445 [2024-12-12 20:40:47.548585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.445 [2024-12-12 20:40:47.548623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.445 [2024-12-12 20:40:47.548630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:03.445 [2024-12-12 20:40:47.548638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:03.445 [2024-12-12 20:40:47.548646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.445 [2024-12-12 20:40:47.578701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.445 [2024-12-12 20:40:47.578841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:03.445 [2024-12-12 20:40:47.578857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.003 ms 00:30:03.445 [2024-12-12 20:40:47.578865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.445 [2024-12-12 20:40:47.578896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.445 [2024-12-12 20:40:47.578904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:03.445 [2024-12-12 20:40:47.578912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:03.445 [2024-12-12 20:40:47.578923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.445 [2024-12-12 20:40:47.579014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.445 [2024-12-12 20:40:47.579024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:30:03.445 [2024-12-12 20:40:47.579032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:30:03.445 [2024-12-12 20:40:47.579039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.445 [2024-12-12 20:40:47.579076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.445 [2024-12-12 20:40:47.579083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:03.445 [2024-12-12 20:40:47.579091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:03.445 [2024-12-12 20:40:47.579098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.445 [2024-12-12 20:40:47.593012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.445 [2024-12-12 20:40:47.593042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:03.445 [2024-12-12 20:40:47.593052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.894 ms 00:30:03.445 [2024-12-12 20:40:47.593059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.445 [2024-12-12 20:40:47.593172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.445 [2024-12-12 20:40:47.593182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:03.445 [2024-12-12 20:40:47.593191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:03.445 [2024-12-12 20:40:47.593198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.445 [2024-12-12 20:40:47.622238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.445 [2024-12-12 20:40:47.622280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:03.445 [2024-12-12 20:40:47.622293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.022 ms 00:30:03.445 [2024-12-12 20:40:47.622302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.445 [2024-12-12 20:40:47.631712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.445 [2024-12-12 20:40:47.631756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:03.445 [2024-12-12 20:40:47.631773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.535 ms 00:30:03.445 [2024-12-12 20:40:47.631780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.703 [2024-12-12 20:40:47.687063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.703 [2024-12-12 20:40:47.687118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:03.703 [2024-12-12 20:40:47.687131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.230 ms 00:30:03.703 [2024-12-12 20:40:47.687140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.703 [2024-12-12 20:40:47.687274] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:03.703 [2024-12-12 20:40:47.687368] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:03.703 [2024-12-12 20:40:47.687473] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:03.703 [2024-12-12 20:40:47.687560] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:03.704 [2024-12-12 20:40:47.687570] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.704 [2024-12-12 20:40:47.687578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:03.704 [2024-12-12 20:40:47.687586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.380 ms 00:30:03.704 [2024-12-12 20:40:47.687594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.704 [2024-12-12 20:40:47.687654] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:03.704 [2024-12-12 20:40:47.687666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.704 [2024-12-12 20:40:47.687677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:03.704 [2024-12-12 20:40:47.687686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:03.704 [2024-12-12 20:40:47.687693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.704 [2024-12-12 20:40:47.702732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.704 [2024-12-12 20:40:47.702776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:03.704 [2024-12-12 20:40:47.702788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.018 ms 00:30:03.704 [2024-12-12 20:40:47.702796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.704 [2024-12-12 20:40:47.711226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.704 [2024-12-12 20:40:47.711259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:03.704 [2024-12-12 20:40:47.711268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:03.704 [2024-12-12 20:40:47.711275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.704 [2024-12-12 20:40:47.711361] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:03.704 [2024-12-12 20:40:47.711513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.704 [2024-12-12 20:40:47.711525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:03.704 [2024-12-12 20:40:47.711533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.153 ms 00:30:03.704 [2024-12-12 20:40:47.711540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.270 [2024-12-12 20:40:48.267311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.270 [2024-12-12 20:40:48.267371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:04.270 [2024-12-12 20:40:48.267386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 555.019 ms 00:30:04.270 [2024-12-12 20:40:48.267394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.270 [2024-12-12 20:40:48.271154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.270 [2024-12-12 20:40:48.271190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:04.270 [2024-12-12 20:40:48.271201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.944 ms 00:30:04.270 [2024-12-12 20:40:48.271208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.270 [2024-12-12 20:40:48.271624] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:30:04.270 [2024-12-12 20:40:48.271647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.270 [2024-12-12 20:40:48.271655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:04.270 [2024-12-12 20:40:48.271664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.407 ms 00:30:04.270 [2024-12-12 20:40:48.271672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.270 [2024-12-12 20:40:48.271698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.270 [2024-12-12 20:40:48.271707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:04.270 [2024-12-12 20:40:48.271715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:04.270 [2024-12-12 20:40:48.271727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.270 [2024-12-12 20:40:48.271760] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 560.397 ms, result 0 00:30:04.270 [2024-12-12 20:40:48.271797] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:04.270 [2024-12-12 20:40:48.271860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.270 [2024-12-12 20:40:48.271870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:04.270 [2024-12-12 20:40:48.271878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:30:04.270 [2024-12-12 20:40:48.271884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.836 [2024-12-12 20:40:48.801871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.836 [2024-12-12 20:40:48.801928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:04.837 [2024-12-12 20:40:48.801953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 529.102 ms 00:30:04.837 [2024-12-12 20:40:48.801962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.837 [2024-12-12 20:40:48.805678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.837 [2024-12-12 20:40:48.805713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:04.837 [2024-12-12 20:40:48.805722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.934 ms 00:30:04.837 [2024-12-12 20:40:48.805730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.837 [2024-12-12 20:40:48.806170] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:04.837 [2024-12-12 20:40:48.806189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.837 [2024-12-12 20:40:48.806197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:04.837 [2024-12-12 20:40:48.806205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.430 ms 00:30:04.837 [2024-12-12 20:40:48.806213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.837 [2024-12-12 20:40:48.806253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.837 [2024-12-12 20:40:48.806262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:04.837 [2024-12-12 20:40:48.806270] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:04.837 [2024-12-12 20:40:48.806277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.837 [2024-12-12 20:40:48.806312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 534.509 ms, result 0 00:30:04.837 [2024-12-12 20:40:48.806350] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:04.837 [2024-12-12 20:40:48.806359] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:04.837 [2024-12-12 20:40:48.806368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.837 [2024-12-12 20:40:48.806375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:04.837 [2024-12-12 20:40:48.806384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1095.026 ms 00:30:04.837 [2024-12-12 20:40:48.806391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.837 [2024-12-12 20:40:48.806431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.837 [2024-12-12 20:40:48.806443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:04.837 [2024-12-12 20:40:48.806451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:04.837 [2024-12-12 20:40:48.806459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.837 [2024-12-12 20:40:48.817145] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:04.837 [2024-12-12 20:40:48.817250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.837 [2024-12-12 20:40:48.817260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:04.837 [2024-12-12 20:40:48.817269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.777 ms 00:30:04.837 [2024-12-12 20:40:48.817276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.837 [2024-12-12 20:40:48.817973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.837 [2024-12-12 20:40:48.817989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:04.837 [2024-12-12 20:40:48.818002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.628 ms 00:30:04.837 [2024-12-12 20:40:48.818009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.837 [2024-12-12 20:40:48.820228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.837 [2024-12-12 20:40:48.820247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:04.837 [2024-12-12 20:40:48.820257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.203 ms 00:30:04.837 [2024-12-12 20:40:48.820265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.837 [2024-12-12 20:40:48.820301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.837 [2024-12-12 20:40:48.820310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:04.837 [2024-12-12 20:40:48.820318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:04.837 [2024-12-12 20:40:48.820327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.837 [2024-12-12 20:40:48.820436] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.837 [2024-12-12 20:40:48.820446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:04.837 [2024-12-12 20:40:48.820454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:04.837 [2024-12-12 20:40:48.820461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.837 [2024-12-12 20:40:48.820480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.837 [2024-12-12 20:40:48.820488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:04.837 [2024-12-12 20:40:48.820495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:04.837 [2024-12-12 20:40:48.820502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.837 [2024-12-12 20:40:48.820533] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:04.837 [2024-12-12 20:40:48.820541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.837 [2024-12-12 20:40:48.820549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:04.837 [2024-12-12 20:40:48.820555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:04.837 [2024-12-12 20:40:48.820562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.837 [2024-12-12 20:40:48.820611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.837 [2024-12-12 20:40:48.820619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:04.837 [2024-12-12 20:40:48.820627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:04.837 [2024-12-12 20:40:48.820634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.837 [2024-12-12 20:40:48.821504] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1327.130 ms, result 0 00:30:04.837 [2024-12-12 20:40:48.833845] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.837 [2024-12-12 20:40:48.849837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:04.837 [2024-12-12 20:40:48.857945] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:04.837 Validate MD5 checksum, iteration 1 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:04.837 20:40:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:04.837 [2024-12-12 20:40:49.014176] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:30:04.837 [2024-12-12 20:40:49.014294] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84805 ] 00:30:05.096 [2024-12-12 20:40:49.175380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.096 [2024-12-12 20:40:49.275580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.996  [2024-12-12T20:40:51.485Z] Copying: 688/1024 [MB] (688 MBps) [2024-12-12T20:40:55.719Z] Copying: 1024/1024 [MB] (average 688 MBps) 00:30:11.491 00:30:11.491 20:40:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:11.491 20:40:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:13.391 20:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:13.391 Validate MD5 checksum, iteration 2 00:30:13.391 20:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6abb9b47131d540a997d42be796bce7c 00:30:13.391 20:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6abb9b47131d540a997d42be796bce7c != \6\a\b\b\9\b\4\7\1\3\1\d\5\4\0\a\9\9\7\d\4\2\b\e\7\9\6\b\c\e\7\c ]] 00:30:13.391 20:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:13.391 20:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:13.391 20:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:13.391 20:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:13.391 20:40:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:13.391 20:40:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:13.391 20:40:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:13.391 20:40:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:13.391 20:40:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:13.391 [2024-12-12 20:40:57.254459] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 00:30:13.391 [2024-12-12 20:40:57.254688] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84893 ] 00:30:13.391 [2024-12-12 20:40:57.415104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.391 [2024-12-12 20:40:57.511975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.292  [2024-12-12T20:40:59.520Z] Copying: 685/1024 [MB] (685 MBps) [2024-12-12T20:41:02.049Z] Copying: 1024/1024 [MB] (average 679 MBps) 00:30:17.821 00:30:17.821 20:41:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:17.821 20:41:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b9cec5914ef7d70e0fa6987cfc51c6c5 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b9cec5914ef7d70e0fa6987cfc51c6c5 != \b\9\c\e\c\5\9\1\4\e\f\7\d\7\0\e\0\f\a\6\9\8\7\c\f\c\5\1\c\6\c\5 ]] 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84771 ]] 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84771 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84771 ']' 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84771 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84771 00:30:19.780 killing process with pid 84771 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84771' 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84771 00:30:19.780 20:41:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84771 00:30:20.347 [2024-12-12 20:41:04.401210] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:20.347 [2024-12-12 20:41:04.412709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.412744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:20.347 [2024-12-12 20:41:04.412755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:20.347 [2024-12-12 20:41:04.412761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.412779] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:20.347 [2024-12-12 20:41:04.414836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.414862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:20.347 [2024-12-12 20:41:04.414874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.046 ms 00:30:20.347 [2024-12-12 20:41:04.414880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.415061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.415069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:20.347 [2024-12-12 20:41:04.415076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.164 ms 00:30:20.347 [2024-12-12 20:41:04.415082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.416148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.416260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:20.347 [2024-12-12 20:41:04.416272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.055 ms 00:30:20.347 [2024-12-12 20:41:04.416282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.417153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.417169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:20.347 [2024-12-12 20:41:04.417177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.845 ms 00:30:20.347 [2024-12-12 20:41:04.417183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.424923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.424951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:20.347 [2024-12-12 20:41:04.424960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.714 ms 00:30:20.347 [2024-12-12 20:41:04.424969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.429256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.429287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:30:20.347 [2024-12-12 20:41:04.429295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.259 ms 00:30:20.347 [2024-12-12 20:41:04.429302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.429366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.429373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:20.347 [2024-12-12 20:41:04.429381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:30:20.347 [2024-12-12 20:41:04.429390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.436616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.436644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:20.347 [2024-12-12 20:41:04.436651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.214 ms 00:30:20.347 [2024-12-12 20:41:04.436657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.443587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.443616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:20.347 [2024-12-12 20:41:04.443622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.905 ms 00:30:20.347 [2024-12-12 20:41:04.443628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.450949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.450977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:20.347 [2024-12-12 20:41:04.450983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.296 ms 00:30:20.347 [2024-12-12 20:41:04.450989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.457932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.457960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:20.347 [2024-12-12 20:41:04.457967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.898 ms 00:30:20.347 [2024-12-12 20:41:04.457973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.457998] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:20.347 [2024-12-12 20:41:04.458010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:20.347 [2024-12-12 20:41:04.458018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:20.347 [2024-12-12 20:41:04.458024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:20.347 [2024-12-12 20:41:04.458030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458047] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:20.347 [2024-12-12 20:41:04.458114] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:20.347 [2024-12-12 20:41:04.458119] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b25e1174-f384-4aa0-a287-d33b4249ab0a 00:30:20.347 [2024-12-12 20:41:04.458125] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:20.347 [2024-12-12 20:41:04.458131] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:20.347 [2024-12-12 20:41:04.458136] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:20.347 [2024-12-12 20:41:04.458141] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:20.347 [2024-12-12 20:41:04.458147] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:20.347 [2024-12-12 20:41:04.458152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:20.347 [2024-12-12 20:41:04.458162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:20.347 [2024-12-12 20:41:04.458166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:20.347 [2024-12-12 20:41:04.458171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:20.347 [2024-12-12 20:41:04.458176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.458182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:20.347 [2024-12-12 20:41:04.458189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.178 ms 00:30:20.347 [2024-12-12 20:41:04.458196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.467602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.467629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:30:20.347 [2024-12-12 20:41:04.467637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.393 ms 00:30:20.347 [2024-12-12 20:41:04.467643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.467914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.347 [2024-12-12 20:41:04.467924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:20.347 [2024-12-12 20:41:04.467931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.253 ms 00:30:20.347 [2024-12-12 20:41:04.467937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.500494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.347 [2024-12-12 20:41:04.500525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:20.347 [2024-12-12 20:41:04.500533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.347 [2024-12-12 20:41:04.500543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.347 [2024-12-12 20:41:04.500565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.348 [2024-12-12 20:41:04.500572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:20.348 [2024-12-12 20:41:04.500578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.348 [2024-12-12 20:41:04.500584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.348 [2024-12-12 20:41:04.500648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.348 [2024-12-12 20:41:04.500657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:20.348 [2024-12-12 20:41:04.500662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.348 [2024-12-12 20:41:04.500668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.348 [2024-12-12 20:41:04.500684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.348 [2024-12-12 20:41:04.500691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:20.348 [2024-12-12 20:41:04.500697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.348 [2024-12-12 20:41:04.500702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.348 [2024-12-12 20:41:04.560730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.348 [2024-12-12 20:41:04.560770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:20.348 [2024-12-12 20:41:04.560779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.348 [2024-12-12 20:41:04.560785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.606 [2024-12-12 20:41:04.610109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.606 [2024-12-12 20:41:04.610147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:20.606 [2024-12-12 20:41:04.610155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.606 [2024-12-12 20:41:04.610162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.606 [2024-12-12 20:41:04.610216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.606 [2024-12-12 20:41:04.610224] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:20.606 [2024-12-12 20:41:04.610230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.606 [2024-12-12 20:41:04.610236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.606 [2024-12-12 20:41:04.610278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.606 [2024-12-12 20:41:04.610293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:20.606 [2024-12-12 20:41:04.610299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.606 [2024-12-12 20:41:04.610304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.606 [2024-12-12 20:41:04.610377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.606 [2024-12-12 20:41:04.610384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:20.606 [2024-12-12 20:41:04.610391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.606 [2024-12-12 20:41:04.610396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.606 [2024-12-12 20:41:04.610438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.606 [2024-12-12 20:41:04.610445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:20.606 [2024-12-12 20:41:04.610453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.606 [2024-12-12 20:41:04.610460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.606 [2024-12-12 20:41:04.610488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.606 [2024-12-12 20:41:04.610495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:20.606 [2024-12-12 20:41:04.610500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.606 [2024-12-12 20:41:04.610506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.606 [2024-12-12 20:41:04.610536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.606 [2024-12-12 20:41:04.610546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:20.606 [2024-12-12 20:41:04.610551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.606 [2024-12-12 20:41:04.610557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.606 [2024-12-12 20:41:04.610644] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 197.914 ms, result 0 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:21.174 Remove shared memory files 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84576 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:21.174 ************************************ 00:30:21.174 END TEST ftl_upgrade_shutdown 00:30:21.174 ************************************ 00:30:21.174 00:30:21.174 real 1m21.564s 00:30:21.174 user 1m52.943s 00:30:21.174 sys 0m17.340s 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.174 20:41:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:21.174 20:41:05 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:30:21.174 20:41:05 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:30:21.174 20:41:05 ftl -- ftl/ftl.sh@14 -- # killprocess 76809 00:30:21.174 20:41:05 ftl -- common/autotest_common.sh@954 -- # '[' -z 76809 ']' 00:30:21.174 20:41:05 ftl -- common/autotest_common.sh@958 -- # kill -0 76809 00:30:21.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76809) - No such process 00:30:21.174 Process with pid 76809 is not found 00:30:21.174 20:41:05 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76809 is not found' 00:30:21.174 20:41:05 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:30:21.174 20:41:05 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85009 00:30:21.174 20:41:05 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85009 00:30:21.174 20:41:05 ftl -- common/autotest_common.sh@835 -- # '[' -z 85009 ']' 00:30:21.174 20:41:05 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.174 20:41:05 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.174 20:41:05 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:21.174 20:41:05 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.174 20:41:05 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.174 20:41:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:21.174 [2024-12-12 20:41:05.378767] Starting SPDK v25.01-pre git sha1 dc2db8405 / DPDK 24.03.0 initialization... 
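Note: the two "Validate MD5 checksum" iterations traced above implement a windowed read-back check: each 1 GiB window (--bs=1048576 --count=1024) is pulled from the ftln1 device over NVMe/TCP at an increasing --skip offset, and its MD5 is compared against the sum recorded when that window was written. A minimal sketch of that loop, assuming the tcp_dd helper from test/ftl/common.sh is sourced; the output path is illustrative, and the expected sums here are simply the two this run produced (the real test computes them during the preceding write phase):

    expected=(6abb9b47131d540a997d42be796bce7c b9cec5914ef7d70e0fa6987cfc51c6c5)
    skip=0
    for i in "${!expected[@]}"; do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read one 1 GiB window back from the FTL bdev via spdk_dd over TCP
        tcp_dd --ib=ftln1 --of=/tmp/ftl_file --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        sum=$(md5sum /tmp/ftl_file | cut -f1 -d' ')
        # Any corruption across the upgrade/shutdown cycle fails the test here
        [[ $sum == "${expected[$i]}" ]] || { echo "MD5 mismatch in window $i"; exit 1; }
        skip=$((skip + 1024))
    done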
00:30:21.174 [2024-12-12 20:41:05.378889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85009 ] 00:30:21.435 [2024-12-12 20:41:05.534745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.435 [2024-12-12 20:41:05.613263] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.006 20:41:06 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.006 20:41:06 ftl -- common/autotest_common.sh@868 -- # return 0 00:30:22.006 20:41:06 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:22.264 nvme0n1 00:30:22.264 20:41:06 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:30:22.264 20:41:06 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:22.264 20:41:06 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:22.523 20:41:06 ftl -- ftl/common.sh@28 -- # stores=5c195810-a264-4a7f-ac05-79f6afc7a4c8 00:30:22.523 20:41:06 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:30:22.523 20:41:06 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c195810-a264-4a7f-ac05-79f6afc7a4c8 00:30:22.781 20:41:06 ftl -- ftl/ftl.sh@23 -- # killprocess 85009 00:30:22.781 20:41:06 ftl -- common/autotest_common.sh@954 -- # '[' -z 85009 ']' 00:30:22.781 20:41:06 ftl -- common/autotest_common.sh@958 -- # kill -0 85009 00:30:22.781 20:41:06 ftl -- common/autotest_common.sh@959 -- # uname 00:30:22.781 20:41:06 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.781 20:41:06 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85009 00:30:22.781 killing process with pid 85009 00:30:22.781 20:41:06 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:22.781 20:41:06 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:22.781 20:41:06 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85009' 00:30:22.781 20:41:06 ftl -- common/autotest_common.sh@973 -- # kill 85009 00:30:22.781 20:41:06 ftl -- common/autotest_common.sh@978 -- # wait 85009 00:30:24.161 20:41:08 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:24.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:24.161 Waiting for block devices as requested 00:30:24.161 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:24.420 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:24.420 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:24.420 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:29.705 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:29.705 Remove shared memory files 00:30:29.705 20:41:13 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:30:29.705 20:41:13 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:29.705 20:41:13 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:30:29.705 20:41:13 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:30:29.705 20:41:13 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:30:29.705 20:41:13 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:29.705 20:41:13 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:30:29.705 00:30:29.705 real 
12m14.913s 00:30:29.705 user 14m29.485s 00:30:29.705 sys 1m1.780s 00:30:29.705 20:41:13 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.705 ************************************ 00:30:29.705 20:41:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:29.705 END TEST ftl 00:30:29.705 ************************************ 00:30:29.705 20:41:13 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:30:29.705 20:41:13 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:30:29.705 20:41:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:30:29.705 20:41:13 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:30:29.705 20:41:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:30:29.705 20:41:13 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:30:29.705 20:41:13 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:30:29.705 20:41:13 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:30:29.705 20:41:13 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:30:29.705 20:41:13 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:30:29.705 20:41:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:29.705 20:41:13 -- common/autotest_common.sh@10 -- # set +x 00:30:29.705 20:41:13 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:30:29.705 20:41:13 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:30:29.705 20:41:13 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:30:29.705 20:41:13 -- common/autotest_common.sh@10 -- # set +x 00:30:31.087 INFO: APP EXITING 00:30:31.087 INFO: killing all VMs 00:30:31.087 INFO: killing vhost app 00:30:31.087 INFO: EXIT DONE 00:30:31.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:31.608 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:31.608 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:31.608 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:30:31.867 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:30:32.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:32.389 Cleaning 00:30:32.389 Removing: /var/run/dpdk/spdk0/config 00:30:32.389 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:32.389 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:32.389 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:32.389 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:32.389 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:32.389 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:32.389 Removing: /var/run/dpdk/spdk0 00:30:32.389 Removing: /var/run/dpdk/spdk_pid58745 00:30:32.389 Removing: /var/run/dpdk/spdk_pid58958 00:30:32.389 Removing: /var/run/dpdk/spdk_pid59176 00:30:32.389 Removing: /var/run/dpdk/spdk_pid59269 00:30:32.389 Removing: /var/run/dpdk/spdk_pid59314 00:30:32.389 Removing: /var/run/dpdk/spdk_pid59431 00:30:32.389 Removing: /var/run/dpdk/spdk_pid59449 00:30:32.389 Removing: /var/run/dpdk/spdk_pid59637 00:30:32.389 Removing: /var/run/dpdk/spdk_pid59735 00:30:32.389 Removing: /var/run/dpdk/spdk_pid59830 00:30:32.389 Removing: /var/run/dpdk/spdk_pid59937 00:30:32.389 Removing: /var/run/dpdk/spdk_pid60034 00:30:32.648 Removing: /var/run/dpdk/spdk_pid60073 00:30:32.648 Removing: /var/run/dpdk/spdk_pid60110 00:30:32.648 Removing: /var/run/dpdk/spdk_pid60180 00:30:32.648 Removing: /var/run/dpdk/spdk_pid60274 00:30:32.648 Removing: /var/run/dpdk/spdk_pid60706 00:30:32.648 Removing: /var/run/dpdk/spdk_pid60759 00:30:32.648 
Removing: /var/run/dpdk/spdk_pid60822 00:30:32.648 Removing: /var/run/dpdk/spdk_pid60838 00:30:32.648 Removing: /var/run/dpdk/spdk_pid60940 00:30:32.648 Removing: /var/run/dpdk/spdk_pid60956 00:30:32.648 Removing: /var/run/dpdk/spdk_pid61047 00:30:32.648 Removing: /var/run/dpdk/spdk_pid61063 00:30:32.648 Removing: /var/run/dpdk/spdk_pid61116 00:30:32.648 Removing: /var/run/dpdk/spdk_pid61134 00:30:32.648 Removing: /var/run/dpdk/spdk_pid61193 00:30:32.648 Removing: /var/run/dpdk/spdk_pid61211 00:30:32.648 Removing: /var/run/dpdk/spdk_pid61365 00:30:32.648 Removing: /var/run/dpdk/spdk_pid61402 00:30:32.648 Removing: /var/run/dpdk/spdk_pid61491 00:30:32.648 Removing: /var/run/dpdk/spdk_pid61663 00:30:32.648 Removing: /var/run/dpdk/spdk_pid61747 00:30:32.648 Removing: /var/run/dpdk/spdk_pid61783 00:30:32.648 Removing: /var/run/dpdk/spdk_pid62212 00:30:32.648 Removing: /var/run/dpdk/spdk_pid62310 00:30:32.648 Removing: /var/run/dpdk/spdk_pid62434 00:30:32.648 Removing: /var/run/dpdk/spdk_pid62487 00:30:32.648 Removing: /var/run/dpdk/spdk_pid62513 00:30:32.648 Removing: /var/run/dpdk/spdk_pid62591 00:30:32.648 Removing: /var/run/dpdk/spdk_pid63213 00:30:32.648 Removing: /var/run/dpdk/spdk_pid63250 00:30:32.648 Removing: /var/run/dpdk/spdk_pid63704 00:30:32.648 Removing: /var/run/dpdk/spdk_pid63802 00:30:32.648 Removing: /var/run/dpdk/spdk_pid63918 00:30:32.648 Removing: /var/run/dpdk/spdk_pid63971 00:30:32.648 Removing: /var/run/dpdk/spdk_pid64002 00:30:32.648 Removing: /var/run/dpdk/spdk_pid64028 00:30:32.648 Removing: /var/run/dpdk/spdk_pid65867 00:30:32.648 Removing: /var/run/dpdk/spdk_pid65999 00:30:32.648 Removing: /var/run/dpdk/spdk_pid66003 00:30:32.648 Removing: /var/run/dpdk/spdk_pid66020 00:30:32.648 Removing: /var/run/dpdk/spdk_pid66060 00:30:32.648 Removing: /var/run/dpdk/spdk_pid66064 00:30:32.648 Removing: /var/run/dpdk/spdk_pid66076 00:30:32.648 Removing: /var/run/dpdk/spdk_pid66121 00:30:32.648 Removing: /var/run/dpdk/spdk_pid66125 00:30:32.648 Removing: /var/run/dpdk/spdk_pid66137 00:30:32.648 Removing: /var/run/dpdk/spdk_pid66182 00:30:32.648 Removing: /var/run/dpdk/spdk_pid66186 00:30:32.648 Removing: /var/run/dpdk/spdk_pid66198 00:30:32.648 Removing: /var/run/dpdk/spdk_pid67582 00:30:32.648 Removing: /var/run/dpdk/spdk_pid67679 00:30:32.648 Removing: /var/run/dpdk/spdk_pid69079 00:30:32.648 Removing: /var/run/dpdk/spdk_pid70830 00:30:32.648 Removing: /var/run/dpdk/spdk_pid70905 00:30:32.648 Removing: /var/run/dpdk/spdk_pid70980 00:30:32.648 Removing: /var/run/dpdk/spdk_pid71080 00:30:32.648 Removing: /var/run/dpdk/spdk_pid71177 00:30:32.648 Removing: /var/run/dpdk/spdk_pid71273 00:30:32.648 Removing: /var/run/dpdk/spdk_pid71341 00:30:32.648 Removing: /var/run/dpdk/spdk_pid71422 00:30:32.648 Removing: /var/run/dpdk/spdk_pid71526 00:30:32.648 Removing: /var/run/dpdk/spdk_pid71619 00:30:32.648 Removing: /var/run/dpdk/spdk_pid71715 00:30:32.648 Removing: /var/run/dpdk/spdk_pid71789 00:30:32.648 Removing: /var/run/dpdk/spdk_pid71864 00:30:32.648 Removing: /var/run/dpdk/spdk_pid71968 00:30:32.648 Removing: /var/run/dpdk/spdk_pid72060 00:30:32.648 Removing: /var/run/dpdk/spdk_pid72156 00:30:32.648 Removing: /var/run/dpdk/spdk_pid72224 00:30:32.648 Removing: /var/run/dpdk/spdk_pid72300 00:30:32.648 Removing: /var/run/dpdk/spdk_pid72404 00:30:32.648 Removing: /var/run/dpdk/spdk_pid72500 00:30:32.648 Removing: /var/run/dpdk/spdk_pid72597 00:30:32.648 Removing: /var/run/dpdk/spdk_pid72660 00:30:32.648 Removing: /var/run/dpdk/spdk_pid72740 00:30:32.648 Removing: 
/var/run/dpdk/spdk_pid72814 00:30:32.648 Removing: /var/run/dpdk/spdk_pid72889 00:30:32.648 Removing: /var/run/dpdk/spdk_pid72992 00:30:32.648 Removing: /var/run/dpdk/spdk_pid73083 00:30:32.648 Removing: /var/run/dpdk/spdk_pid73179 00:30:32.648 Removing: /var/run/dpdk/spdk_pid73253 00:30:32.648 Removing: /var/run/dpdk/spdk_pid73324 00:30:32.648 Removing: /var/run/dpdk/spdk_pid73396 00:30:32.648 Removing: /var/run/dpdk/spdk_pid73478 00:30:32.648 Removing: /var/run/dpdk/spdk_pid73581 00:30:32.648 Removing: /var/run/dpdk/spdk_pid73672 00:30:32.648 Removing: /var/run/dpdk/spdk_pid73810 00:30:32.648 Removing: /var/run/dpdk/spdk_pid74094 00:30:32.648 Removing: /var/run/dpdk/spdk_pid74125 00:30:32.648 Removing: /var/run/dpdk/spdk_pid74565 00:30:32.648 Removing: /var/run/dpdk/spdk_pid74747 00:30:32.648 Removing: /var/run/dpdk/spdk_pid74851 00:30:32.648 Removing: /var/run/dpdk/spdk_pid74967 00:30:32.648 Removing: /var/run/dpdk/spdk_pid75015 00:30:32.648 Removing: /var/run/dpdk/spdk_pid75039 00:30:32.648 Removing: /var/run/dpdk/spdk_pid75348 00:30:32.648 Removing: /var/run/dpdk/spdk_pid75403 00:30:32.648 Removing: /var/run/dpdk/spdk_pid75470 00:30:32.648 Removing: /var/run/dpdk/spdk_pid75863 00:30:32.648 Removing: /var/run/dpdk/spdk_pid76004 00:30:32.648 Removing: /var/run/dpdk/spdk_pid76809 00:30:32.648 Removing: /var/run/dpdk/spdk_pid76941 00:30:32.648 Removing: /var/run/dpdk/spdk_pid77114 00:30:32.648 Removing: /var/run/dpdk/spdk_pid77206 00:30:32.648 Removing: /var/run/dpdk/spdk_pid77504 00:30:32.648 Removing: /var/run/dpdk/spdk_pid77746 00:30:32.648 Removing: /var/run/dpdk/spdk_pid78077 00:30:32.648 Removing: /var/run/dpdk/spdk_pid78255 00:30:32.906 Removing: /var/run/dpdk/spdk_pid78342 00:30:32.906 Removing: /var/run/dpdk/spdk_pid78400 00:30:32.906 Removing: /var/run/dpdk/spdk_pid78492 00:30:32.906 Removing: /var/run/dpdk/spdk_pid78517 00:30:32.906 Removing: /var/run/dpdk/spdk_pid78570 00:30:32.906 Removing: /var/run/dpdk/spdk_pid78736 00:30:32.906 Removing: /var/run/dpdk/spdk_pid78961 00:30:32.906 Removing: /var/run/dpdk/spdk_pid79493 00:30:32.906 Removing: /var/run/dpdk/spdk_pid80293 00:30:32.906 Removing: /var/run/dpdk/spdk_pid80838 00:30:32.906 Removing: /var/run/dpdk/spdk_pid81530 00:30:32.906 Removing: /var/run/dpdk/spdk_pid81667 00:30:32.906 Removing: /var/run/dpdk/spdk_pid81755 00:30:32.906 Removing: /var/run/dpdk/spdk_pid82130 00:30:32.906 Removing: /var/run/dpdk/spdk_pid82185 00:30:32.906 Removing: /var/run/dpdk/spdk_pid82690 00:30:32.906 Removing: /var/run/dpdk/spdk_pid83242 00:30:32.906 Removing: /var/run/dpdk/spdk_pid84070 00:30:32.906 Removing: /var/run/dpdk/spdk_pid84182 00:30:32.906 Removing: /var/run/dpdk/spdk_pid84228 00:30:32.906 Removing: /var/run/dpdk/spdk_pid84284 00:30:32.906 Removing: /var/run/dpdk/spdk_pid84341 00:30:32.906 Removing: /var/run/dpdk/spdk_pid84399 00:30:32.906 Removing: /var/run/dpdk/spdk_pid84576 00:30:32.906 Removing: /var/run/dpdk/spdk_pid84658 00:30:32.906 Removing: /var/run/dpdk/spdk_pid84709 00:30:32.906 Removing: /var/run/dpdk/spdk_pid84771 00:30:32.906 Removing: /var/run/dpdk/spdk_pid84805 00:30:32.906 Removing: /var/run/dpdk/spdk_pid84893 00:30:32.906 Removing: /var/run/dpdk/spdk_pid85009 00:30:32.906 Clean 00:30:32.906 20:41:16 -- common/autotest_common.sh@1453 -- # return 0 00:30:32.906 20:41:16 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:30:32.906 20:41:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.906 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:30:32.906 20:41:17 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:30:32.906 20:41:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.906 20:41:17 -- common/autotest_common.sh@10 -- # set +x 00:30:32.906 20:41:17 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:32.906 20:41:17 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:32.906 20:41:17 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:32.906 20:41:17 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:30:32.906 20:41:17 -- spdk/autotest.sh@398 -- # hostname 00:30:32.906 20:41:17 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:33.164 geninfo: WARNING: invalid characters removed from testname! 00:31:00.164 20:41:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:00.729 20:41:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:02.639 20:41:46 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:05.206 20:41:49 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:07.112 20:41:51 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:09.010 20:41:52 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:10.911 20:41:55 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:10.911 20:41:55 -- spdk/autorun.sh@1 -- $ timing_finish 00:31:10.911 20:41:55 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:31:10.911 20:41:55 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:10.911 20:41:55 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:31:10.911 20:41:55 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:10.911 + [[ -n 5034 ]] 00:31:10.911 + sudo kill 5034 00:31:11.179 [Pipeline] } 00:31:11.193 [Pipeline] // timeout 00:31:11.198 [Pipeline] } 00:31:11.211 [Pipeline] // stage 00:31:11.216 [Pipeline] } 00:31:11.228 [Pipeline] // catchError 00:31:11.236 [Pipeline] stage 00:31:11.238 [Pipeline] { (Stop VM) 00:31:11.248 [Pipeline] sh 00:31:11.526 + vagrant halt 00:31:13.426 ==> default: Halting domain... 00:31:18.699 [Pipeline] sh 00:31:18.972 + vagrant destroy -f 00:31:20.872 ==> default: Removing domain... 00:31:21.815 [Pipeline] sh 00:31:22.093 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:31:22.103 [Pipeline] } 00:31:22.118 [Pipeline] // stage 00:31:22.123 [Pipeline] } 00:31:22.136 [Pipeline] // dir 00:31:22.141 [Pipeline] } 00:31:22.155 [Pipeline] // wrap 00:31:22.161 [Pipeline] } 00:31:22.173 [Pipeline] // catchError 00:31:22.183 [Pipeline] stage 00:31:22.185 [Pipeline] { (Epilogue) 00:31:22.198 [Pipeline] sh 00:31:22.478 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:29.069 [Pipeline] catchError 00:31:29.071 [Pipeline] { 00:31:29.083 [Pipeline] sh 00:31:29.361 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:29.361 Artifacts sizes are good 00:31:29.368 [Pipeline] } 00:31:29.382 [Pipeline] // catchError 00:31:29.392 [Pipeline] archiveArtifacts 00:31:29.399 Archiving artifacts 00:31:29.530 [Pipeline] cleanWs 00:31:29.541 [WS-CLEANUP] Deleting project workspace... 00:31:29.541 [WS-CLEANUP] Deferred wipeout is used... 00:31:29.547 [WS-CLEANUP] done 00:31:29.549 [Pipeline] } 00:31:29.564 [Pipeline] // stage 00:31:29.570 [Pipeline] } 00:31:29.591 [Pipeline] // node 00:31:29.596 [Pipeline] End of Pipeline 00:31:29.632 Finished: SUCCESS
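Note: the kill/wait sequences traced above for pids 84771 and 85009 (and the final "sudo kill 5034" in the epilogue) all follow the same teardown pattern. A condensed sketch of that pattern; the real killprocess helper lives in test/common/autotest_common.sh and carries additional guards not reproduced here:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                  # autotest_common.sh@954: reject an empty pid
        kill -0 "$pid" 2>/dev/null || return 0     # @958: nothing to do if it is already gone
        # @960/@964: look up the command name; this sketch simply refuses to kill sudo
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        echo "killing process with pid $pid"       # @972
        kill "$pid"                                # @973
        wait "$pid" 2>/dev/null || true            # @978: reap it if it is our child
    }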